
TIES 2014 — Toronto International Electroacoustic Symposium

Abstracts and biographies

Ricardo Coelho de Souza performing David Ikard’s Água Eletrônica (2013), for water percussion and live electronics, during the Toronto Electroacoustic Symposium at Theatre Direct’s Wychwood Theatre on 15 August 2013.

TIES 2014 is a co-presentation of the Canadian Electroacoustic Community (CEC) and New Adventures in Sound Art (NAISA) in collaboration with the Canadian Music Centre (CMC). TIES is held in parallel with the 16th edition of Sound Travels, NAISA’s annual Festival of Sound Art. The Keynote Speaker for TIES 2014 is Pauline Oliveros.

Activities take place at the Canadian Music Centre (Thursday and Friday morning sessions) and at the Artscape Wychwood Barns (all concerts and afternoon sessions, all activities on Saturday and Sunday). Inside the Artscape Wychwood Barns, there are two venues: Theatre Direct’s Wychwood Theatre (Studio 176) and the NAISA Space (Studio 252).

Registration — includes entry to all concerts
Webcast — Listen in to all events on a live stream.

Questions about the schedule or any other aspect of the symposium can be directed by email to Emilie LeBel, Chair of the symposium committee. For any registration or Sound Travels questions, contact Nadene Thériault-Copeland.


Day 2 — Thursday 14 August


09:30–11:00 • Session #1: Software and EA Tools

Venue: Canadian Music Centre
Chair: Kevin Austin

A Multi-Touch Gesture Recording and Manipulation System for Musical Interfaces
by Lawrence Fyfe

The near-ubiquitous availability of multi-touch devices — from phones to tablets — has inspired composers, musicians and programmers to create touch-based musical interfaces. In building these interfaces, creators develop multi-touch gestures and map them to musical gestures. Those musical gestures can be recorded and the audio can be manipulated in real time or as recorded audio files. But what if developers could build interfaces in which the multi-touch gestures themselves could be recorded and manipulated like audio files? To answer this question, the JunctionBox interaction toolkit for multi-touch devices now has a gesture recording and manipulation system with the ability to record, play back, loop, layer and time-scale multi-touch interactions that map to musical gestures. Once a gesture is recorded, it can be played back as many times as desired, with the option of looping playback. New gestures can be recorded over existing gestures with both played back, allowing for gesture layering. After gestures have been recorded, they can be time-scaled, making the gestures either shorter or longer in time. Overall, these new features in JunctionBox allow multi-touch gestures to be recorded and manipulated much like audio files.

A number of tools for musicians implement seemingly similar functionality. Different Strokes, a music composition and performance system by Mark Zadel and Gary Scavone, enables freehand drawing gestures made with a mouse to be recorded, looped and played back. JunctionBox offers the same functionality as Different Strokes but, being more flexible, can be used for a much greater range of gestures than freehand drawing. GDIF (the Gesture Description Interchange Format) is a scheme that uses Open Sound Control (OSC) messages to encode gesture data. JunctionBox maps touch interactions to arbitrary OSC messages and could use GDIF messages; GDIF should therefore be considered complementary to the JunctionBox gesture recording system.

JunctionBox has a flexible approach to defining gestures, allowing for a range of gestures from the automation of interface widgets to directly drawing musical gestures with touch. A gesture is simply defined as a series of touch locations and the time between touches. This definition increases flexibility and allows essentially any action to be a gesture. This is in contrast to gesture recognition systems that only recognize predefined actions. JunctionBox allows performers to leverage this power without any imposed ideologies or ontologies of gesture or performance, allowing users to personalize and customize their software interfaces to their exact needs and ideas.
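
The model is simple enough to sketch in code. The following JavaScript fragment is a hypothetical illustration of recording, looped playback and time-scaling over a list of timed touch points; it is not JunctionBox’s actual API, and every name in it is invented for the example.

```javascript
// Hypothetical sketch of the record/playback scheme described above.
// A gesture is simply a series of touch locations plus the time
// between touches -- nothing is predefined or recognized.
class GestureRecorder {
  constructor() {
    this.events = [];     // recorded {x, y, dt} samples
    this.lastTime = null;
  }

  // Call on every touch event while recording.
  record(x, y, timeMs) {
    const dt = this.lastTime === null ? 0 : timeMs - this.lastTime;
    this.lastTime = timeMs;
    this.events.push({ x, y, dt });
  }

  // Play the gesture back through a callback, optionally looping and
  // time-scaled (scale > 1 stretches the gesture, < 1 compresses it).
  play(onTouch, { loop = false, scale = 1.0 } = {}) {
    let t = 0;
    for (const e of this.events) {
      t += e.dt * scale;
      setTimeout(() => onTouch(e.x, e.y), t);
    }
    if (loop) setTimeout(() => this.play(onTouch, { loop, scale }), t);
  }
}

// Layering falls out of the same structure: play back one recorder
// while a second records, then trigger both play() calls together.
```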

Lawrence Fyfe is an arts researcher, interaction designer, laptop performer and creative coder. He is pursuing a PhD at the University of Calgary in Computational Media Design, an interdisciplinary programme that combines computer science, design and the arts. Before starting his PhD, he earned a Master’s degree in Music, Science, and Technology at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University in 2008. His research is focused on the design and building of software toolkits that enable creative applications. More specifically, he designs and develops music performance systems with a particular interest in multi-touch and networked instruments.

Flocking.js: JavaScript Audio for Artists
by Colin Clark

Flocking is a web-based framework for audio synthesis, composition and performance. Written entirely in JavaScript, Flocking applications can run on a variety of platforms, including traditional desktop operating systems as well as mobile platforms. Flocking is also used by the author to create sound installations and compositions on embedded platforms such as the popular Raspberry Pi computer using Node.js. This paper will position Flocking within the emerging context of web audio tools and techniques; several recent pieces composed with it will be demonstrated.

Unlike traditional music frameworks, Flocking is based on a declarative approach that represents instruments and compositions as data rather than code. In Flocking, synths and scores alike are composed from collections of unit generators that are specified as trees of JavaScript Object Notation (JSON) documents. Flocking’s declarative approach represents a form of meta-programming that can support the creation of programs that are able to understand and transform the structure and content of an instrument or musical algorithm. For example, the author is working on graphical tools for creating and editing instruments built with Flocking. The goal is to enable musicians who are more comfortable with visual flow environments such as Max/MSP to collaborate directly with developers who prefer traditional text-based computer music programming.
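
As an illustration of this declarative style, here is a small instrument specified entirely as data, modelled on the examples in Flocking’s documentation: a sine oscillator whose frequency is modulated by a stepped noise generator. Exact unit generator names and setup calls may differ between Flocking versions.

```javascript
// A synth declared as a JSON-style tree of unit generators, after the
// style of Flocking's documentation (names may vary across versions).
var enviro = flock.init();

var synth = flock.synth({
    synthDef: {
        id: "carrier",
        ugen: "flock.ugen.sinOsc",        // sine carrier
        freq: {
            ugen: "flock.ugen.lfNoise0",  // stepped random modulator
            freq: 4,                      // four new values per second
            mul: 220,
            add: 440                      // wander around 440 Hz
        },
        mul: 0.25
    }
});

enviro.start();
```

Because the synthDef is plain data rather than code, a separate tool can inspect or rewrite it before instantiation, which is what makes the graphical editing described above feasible.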

Flocking provides a flexible means for reusing and adapting instruments and compositions, allowing musicians to overlay their own customizations onto an existing instrument without having to directly change its internals. This helps to promote the sharing of modules amongst a body of compositions and across communities of digital instrument designers.

Recently, support has been added for Open Sound Control, specifically to handle hardware controllers. An example, dubbed the Flock Box, will be demonstrated. The Flock Box consists of four potentiometers connected to a Teensy embedded controller that issues OSC value changes each time a pot is moved. A Node.js server listens for OSC messages and forwards them to Flocking. This data can be easily mapped to arbitrary inputs within an instrument’s unit generator tree using a declarative input map specification.
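
A sketch of that signal path, using the osc.js library for Node.js (the port number, OSC addresses and input-map format are assumptions for the example, not the Flock Box’s actual configuration):

```javascript
// Forward OSC messages from hardware to a running Flocking synth.
// Assumes the 'synth' object from the previous sketch is in scope.
var osc = require("osc");

var udpPort = new osc.UDPPort({
    localAddress: "0.0.0.0",
    localPort: 57121              // port chosen arbitrarily here
});

// Illustrative input map: OSC address -> unit generator input path.
var inputMap = {
    "/pot/0": "carrier.freq",
    "/pot/1": "carrier.mul"
};

udpPort.on("message", function (msg) {
    var path = inputMap[msg.address];
    if (path !== undefined) {
        synth.set(path, msg.args[0]);  // update the synth's input
    }
});

udpPort.open();
```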

Flocking has proved to be highly germane to the author’s own musical practice, supporting an integral conversation between the composition process and the emerging technical capabilities of the system. It has promoted a compositional approach that embraces constraints and an æsthetic of making the most out of small, composable building blocks. Flocking is a viable musical tool for composers and sound artists working in a variety of media and styles, and is available on GitHub for use and modification under a liberal open source license.

Colin Clark is a composer, video artist and software developer living in Toronto. He is the creator of Flocking, a JavaScript framework for web-based creative audio coding. Colin is currently the Lead Software Architect at OCAD University’s Inclusive Design Research Centre and is a recognized leader in the field of inclusive design and software architecture. His music has been performed by Arraymusic, the neither/nor collective, the Draperies and by his own ensembles, Lions and Fleischmop. Colin’s soundtracks for experimental films by Izabella Pruska-Oldenhof and R. Bruce Elder have been shown at film festivals and cinémathèques internationally. He is currently working towards an MFA in Interdisciplinary Art, Media and Design at OCAD University.
http://fluidproject.org

Tools for Sound Spatialization Developed at Université de Montréal
by Robert Normandeau

Several spatialization tools have been developed in recent years at the Université de Montréal that are integrated into an audio sequencer in order to allow composers to work with space during the entire compositional process. This contrasts with the typical approach taken by electroacoustic composers, where autonomous spatialization tools force the composer to first compose the work and to consider only after it is complete how the work can be made to inhabit a performance space.

The Octogris and ZirkOSC are spatialization plug-ins developed for Mac to manage octophony in 2D and to control the Zirkonium in 3D, respectively.

Development of Octogris came out of the need for a 2D spatialization tool integrated into a Macintosh audio sequencer (Digital Performer, Logic or Reaper), as opposed to a dedicated autonomous tool. The first version was released in 2010, about a year after it was first presented at TES 2009. The latest version (November 2013) includes a few corrections to the first version, is 64-bit (and 32-bit compatible) and can be used with an integrated joystick via Automap, which allows for automation of movement.

Version 2 of Octogris is presented here, with several new functions, including Audio Unit, VST and Windows VST versions of the plug-in. Two user modes have also been integrated. In “Free Mode” (Mode 1, as in version 1), the user can place speakers and sources where desired, with no automatic sound compensation. In “Controlled Mode”, the maximum level is in the centre, the zones inside and outside the circle of speakers are treated differently, and pan attenuation is taken into account.

ZirkOSC came about through the need for a 3D spatialization tool integrated into an audio sequencer (Digital Performer, Logic or Reaper, as opposed to a dedicated autonomous tool). The Zirkonium was originally developed in 2005 by the ZKM in Karlsruhe (Germany) to fulfil its need for 3D software controlling the sonic space over a dome of speakers; the ZKM has a 47-speaker dome permanently installed in its concert hall as well as an additional 24-speaker minidome in a production studio. Our project was to integrate the Zirkonium into a Mac audio sequencer in the form of an AU plug-in, so that each track can be spatialized independently during the compositional process. The history and functions of the plug-in are presented here, along with its new version 2.

New developments in the newest versions of Octogris and ZirkOSC include Open Sound Control input, so that any device with OSC output can now control the plug-ins. Pre-defined movement patterns can also now be programmed in both the plug-ins.

Robert Normandeau holds an MMus and a DMus in composition from Université de Montréal, where he studied with Marcelle Deschênes and Francis Dhomont. His work figures on seven solo discs: Lieux inouïs, Tangram, Figures, Clair de terre, Palimpsestes, Puzzles (all on empreintes DIGITALes) and Sonars (Rephlex). He has been awarded three Opus Prizes from the Conseil québécois de la musique. Among his international recognitions are the Golden Nica at Prix Ars Electronica 1996 (Linz, Austria), First Prize at the Concours international de musiques sacrées 2002 (Fribourg, Switzerland), the production prize at the Giga-Hertz Preis 2010 (Karlsruhe, Germany) and First Prize in mixed music at Musica Nova 14 (Praha, Czech Republic). He was awarded the Masque 2001 for Malina and the Masque 2005 for La cloche de verre, given by the Académie québécoise du théâtre. He has been Professor of Electroacoustic Composition at Université de Montréal since 1999.
http://www.electrocd.com/en/bio/normandeau_ro

11:15–12:45 • Session #2: Issues of EA Performance and Distribution

Venue: Canadian Music Centre
Chair: Matthew Fava

Further Access Beyond Concert Performance: Practical consequences of on-demand online electroacoustic music streaming
by Jean-François Denis

Following up on topics addressed a year ago at TES 2013, this presentation includes a historical overview of the Montréal-based electroacoustic music label empreintes DIGITALes since its founding in 1990. Schematizing the changes in the cultural and social landscape brought about by technological changes in the means of communication, we question the relevance of a “label” and its roles in curating, presentation and outreach, finally positioning version two of the on-demand online streaming service electro:thèque <electrotheque.com> and comparing this model to other similar services.

Jean-François Denis first discovered electroacoustics during a summer course given by Kevin Austin in 1981 at Concordia University in Montréal (Québec). Hooked, he pursued music studies at Mills College in Oakland (USA) under David Rosenboom (MFA, 1984). He worked in live electroacoustics (solo and in ensembles) and created works for dance and multimedia into the mid-90s. He is the artistic director of the empreintes DIGITALes record label. In 1994, for his exceptional commitment to Canadian composers, he was awarded the Friends of Canadian Music Award, organized by the CMC and the CLC. In 1995 he was presented with the Jean A. Chalmers Award for Musical Composition — Presenters’ Award for his contribution to the diffusion of new Canadian (electroacoustic music) work. In 2011 he was awarded the Prix Opus 2009–10 “Artistic Director of the Year” for his 20-year involvement in sound recording publishing and production. [empreintes DIGITALes]
http://www.electrocd.com/en/bio/denis_je

Twiddling and Twerking: Thoughts on electroacoustic music performance
by Steven Naylor

Here we take a broad (and slightly wary) view of the current state of performative electroacoustic music, from the perspectives of both performer and audience. We first consider performance gestures that follow from performer interaction with music technology (“twiddling”). Are those gestures, and the corresponding visible feedback from related devices, primarily functional or largely theatrical? Are they imitative or instinctive? How do visible actions correspond to sonic results? And how do performance gestures in electroacoustic music compare with those associated with a traditional acoustic instrument?

We next assess the role of conspicuous arrays of technology, such as might be placed on a concert stage or deployed throughout a venue for acousmatic music (“twerking”). Is such an array a requirement for musical or spatial exploration? Or is it deliberately theatrical — perhaps even gratuitous? And how does the prominent, visual presence of technology affect an audience’s response to the music itself?

Within our discussion of both areas, we also take into account the blurring of boundaries between “popular” electronic music and “serious” electroacoustic music, and consider how that blurring may affect the interconnections between doing, seeing and hearing.

Steven Naylor composes electroacoustic and instrumental concert music, performs (piano, electronics, seljefløyte) in ensembles concerned with both through-composition and improvisation, and creates scores and sound designs for theatre, film, television and radio. His concert works have been performed and broadcast internationally; his theatre scores have played to live audiences of over 5 million, in 13 countries. Steven co-founded Nova Scotia’s Upstream and Oscillations Festival, and is a former President of the Canadian Electroacoustic Community. He is presently artistic director of subText and an Adjunct Professor in the School of Music at Acadia University. His first solo DVD-A of electroacoustic works, Lieux imaginaires (empreintes DIGITALes), was nominated for a 2013 East Coast Music Award. Steven completed his PhD in Musical Composition at the University of Birmingham, UK, supervised by Jonty Harrison.
http://sonicart.ca

The Pipe
by Jean-François Laporte

My research and sound experiments over the past 14 years have led me to develop and build my own musical instruments in order to better respond to my artistic needs. The Pipe is one of the latest results of this research: a musical instrument based on a new version of a previous instrument I developed, called the Tu-Yo. This new instrument is mechanised and robotised in order to generate a full range of sound.

Jean-François Laporte is a Québécois artist active in the contemporary art scene since the mid-1990s. He pursues a hybrid approach integrating sonic arts, musical composition, performance, interpretation, installation and digital art. Trained at Université de Montréal and IRCAM, his compositional approach is built on active listening and observation of the reality of each phenomenon. For over a decade, he has invested much energy in the development and fabrication of new musical instruments, which are integrated into his installation works. He also collaborates with contemporary dancers. Since 1993, Laporte has written more than sixty works that have been premiered and performed both in Canada and abroad. He has been awarded numerous prizes, many of them from the Conseil québécois de la musique (CQM), as well as first prizes in the international competitions Luigi Russolo (2003) and Città di Udine (2004). He is the founder and the artistic and general director of Productions Totem contemporain (PTC), and his works are published by Babel Scores Publishing.
http://www.jflaporte.com

Day 3 — Friday 15 August


09:30–11:00 • Session #3: Perspectives on Cultural Meaning, Storytelling and Education

Venue: Canadian Music Centre
Chair: James O’Callaghan

Composing Electroacoustic Music in the Intercultural Context: Cultural meanings and uses of timbre
by Jeffrey Roberts

In the context of intercultural composition, searching for shared æsthetics or musical elements offers a way to integrate differing music traditions. Timbre, valued in many musical cultures, is an attractive element to focus on in intercultural composition, especially in electroacoustic music, where it can be closely analysed and manipulated. While such manipulation can provide a wealth of material for compositional use, it can also have a destructive effect on the identity of an instrument, especially where the music seeks to preserve and balance the æsthetic and sonic identities of multiple music traditions.

This paper looks at both practical and æsthetic concerns in working with timbre in an electroacoustic intercultural context. On the practical side, we first provide a brief review of the spectral characteristics of several East Asian and Western instruments, to establish a platform for discussing basic timbral similarities and differences, possibilities of spectral integration, and electronic manipulations at which instruments begin to lose their sonic identity. We then discuss timbre as a significant sonic and æsthetic factor that defines an instrument’s cultural identity and that can aid in preserving that identity under timbral manipulation. Cultural-æsthetic meanings related to timbre are also discussed as possible shaping forces in how timbre is manipulated and integrated with another music tradition.

Throughout the presentation, examples from various 20th- and 21st-century works will be referenced to demonstrate issues in the balancing of cultural-timbral identity in an electroacoustic intercultural work. Included in these examples will be the author’s own composition, Twelve Landscape Views, III, for guqin, saxophone and electronics. This work specifically addresses integrating timbral elements from a Chinese-Daoist, New England Transcendentalist cross-cultural æsthetic perspective by working with accelerometer-controlled post-attack transient resonances of guqin and saxophone, looking to integrate the spectral worlds of both while also preserving their original sonic-cultural identities.

As a composer and improviser, Jeff Roberts integrates elements of music styles and cultural traditions that sonically and æsthetically resonate. His background in improvisation and experimentation combines with studies in China to shape his compositional language. His music has been commissioned and performed in the US, Europe and China. Roberts’ music has received recognition with awards and residencies (VCCA, ACA, Brush Creek, STEIM). He was a Fulbright Scholar to China, studying guqin and Chinese æsthetics. He improvises on guqin, guitar, found objects and electronics and has worked with Jin He Kim, Jane Ira Bloom, Elliot Sharp, Richard Teitelbaum and Wu Na. He researches intercultural composition and has presented papers at multiple conferences. He directs The Walden Percussion Project, a found object ensemble. Jeff Roberts holds a PhD in Composition from Brandeis University, was a visiting professor of composition at Williams College and currently teaches at the University of Alberta.
http://www.jeff-roberts.org

Una Casa de Sonidos: Sonic storytelling with Central American refugee minors
by Blake McConnell

Una Casa de Sonidos is a project that enables undocumented, unaccompanied youth in the Casa de Sueños programme in Phoenix, Arizona to tell stories using the medium of sound. This paper discusses theoretical precedents for community-based art projects, as well as other relevant conceptual and formal foundations. It also details the procedures involved in facilitating workshops at Casa de Sueños, as well as the public presentation of work produced there. This discussion of presentation operates within the frame of “witnessing”, whereby the public “stands in” for the absent youth, completing the process that brings their work to life within the “house of sounds”.

A partnership with the Phoenix, Arizona-based Tumbleweed Center for Youth Development programme Casa de Sueños, Una Casa de Sonidos brings youth in the custody of the Federal Office of Refugee Resettlement into contact with a variety of audio production techniques. The sounds resulting from these experiments populate an interactive sound installation that responds to the presence and movement of visitors. Through use of custom software and live, optical tracking, gallery visitors experience the sonic narratives of Casa de Sueños youth as an immersive, interactive audio environment where every movement is observed.

Every year, thousands of unaccompanied minors attempt to enter the US without proper legal documentation. The Homeland Security Act of 2002 assigned responsibility for the care of these minors in federal custody to the Office of Refugee Resettlement. The youth remain in custody for an average of 45 days, during which time the federal government reviews their refugee status. A variety of factors compel youth to migrate, including ethnic persecution, political instability in their home country and extreme poverty. These intersect in unique ways for each youth.

Casa de Sueños provides case management and reunification services to this underserved population. While living at Casa de Sueños, youth have access to services, schooling and vocational training. As part of this vocational training programme, I, along with other local artists and educators, have the opportunity to expose youth to art-making practices and to teach them technical skills. Despite the resources available to youth while living in the house, their incarceration there imposes certain constraints: they cannot leave unsupervised and all their activities are monitored.

The “house of sounds” metaphor arises from the contradiction between the benefits provided by Casa de Sueños and the freedoms curtailed there. A house, like the box used as the body of a homemade instrument, is a container designed to hold and protect but also to constrain and constrict. This tension produces a tone within the space of Casa de Sueños. Sonic exploration empowers youth, despite the limitations imposed by their incarceration, to tell stories using homemade, sound-making devices constructed during hands-on activities. Ultimately, participants in Una Casa de Sonidos create sound works they can truly take ownership of, constructing, from start to finish, their own “house of sounds”.

Blake McConnell is a media artist, musician and engineer, currently living in San Francisco, but originally from Atlanta, Georgia. His work manifests in many ways, but sits at the intersection of media, technology and society. He holds an MFA with a concentration in Arts, Media and Engineering from Arizona State University. His collaborative work has been shown at the Young@Art Gallery at the Scottsdale Museum of Contemporary Art and 516 Arts (Albuquerque, New Mexico), and was part of the International Symposium on Electronic Art (ISEA) 2012. He is a recipient of the 2012 Good n’ Plenty Grant funded by the Scottsdale Museum of Contemporary Art and Scottsdale Public Art.
http://www.blakemcconnell.com

The Laptop Orchestra as a Framework for Transformational Education
by Eldad Tsabary

In the second half of the twentieth century, the evolving acousmatic tradition laid down the historical, perceptual, schematic and technical foundations of digital electroacoustic (EA) music. Technological advances and further solidification of these foundations in the past decade have allowed digital EA music-making to gradually expand from solitary studios to the domain of live communal EA: laptop ensembles. In live, real-time settings, EA musicians need fluency in the language of EA even more than in studio settings (much as fluency is more necessary in speech than in writing). Furthermore, the communicative needs that emerged in the ensemble setting exposed the inadequacy of traditional notation and conducting for communicating EA’s schemata, sonic parameters and processes, primarily because they are not flexible enough to adapt to the ever-evolving language of EA. The Concordia Laptop Orchestra (CLOrk) is an ensemble of 20–25 laptop performers that operates in the framework of a university course built around participatory production of interdisciplinary and networked laptop-orchestra performances.

Musical mediation in CLOrk’s activities typically begins with an eidetic reduction, a phenomenological process used for parsing artistic ideas into their essential components, thereby permitting their mediation to performers across disciplines and locations one parameter at a time. In CLOrk’s performances, this mediation has been accomplished through asynchronous (non-real-time) devices such as text scores, quick-access scores and technological design, and synchronous (real-time) devices including Soundpainting, conductor multiplicity, cue sheets and text chat. Because the needs of innovative and experimental performances cannot be fully determined in advance, mediative design in CLOrk’s activities is typically addressed through a “whatever works” philosophical approach, which depends on collaborative observation, reflection, flexibility and action. In this presentation I provide an overview of communicative essences, asynchronous and synchronous mediative devices, and the reflective processes in CLOrk’s performances. The presentation also includes a survey of relevant background information, audiovisual examples and score excerpts.

Eldad Tsabary is a professor of EA at Concordia University (Montréal). He founded the Concordia Laptop Orchestra (CLOrk) and directed it through interdisciplinary collaborative performances with symphonic, chamber, jazz and laptop orchestras, soloists, dancers and VJs, and through telematic, telemetronomic and networked performances. For almost a decade, Tsabary has also been the primary developer of an aural training method specialized for EA at Concordia University. In recent years, he has organized numerous events, including Hug The World 2012 (a telematic jam session involving 23 locations), 60x60 Order of Magnitude (a music/dance/video show involving music by 600 composers), symposia including Understanding Visual Music and Concordia Live and Interactive Electroacoustic Colloquium, among others. As composer, his works have won prizes and mentions internationally, and have been released on over 20 albums and performed in hundreds of shows worldwide. Eldad received his doctorate in music education from Boston University. He is the current president of the CEC.
http://www.yaeldad.com

11:15–12:45 • Session #4: Studies in Sound

Venue: Canadian Music Centre
Chair: Elainie Lillios

An Audio Programming Language Informed by the Theory of Cognitive Dimensions
by Tanya Goncalves

Society is characterized and controlled by code technologies, and yet participation and engagement with code are far from universal. By leveraging artistic interest, audio programming promises to narrow this digital divide. The principal goal of this project is to make audio programming languages more inclusive. How can we make audio programming simpler for novice users? How can we teach audio programming more effectively? How can audio programming provide a foundation for broader coding practices?

In order to address these questions, the author has proposed a two-part research programme. The first part consists of a comparative analysis of audio programming examples in terms of Thomas Green’s cognitive dimensions. Green outlined 14 dimensions of code languages that influence the success of design tasks. Informed by this analysis, the second part involves the production of a small, original audio language, leveraging the existing SuperCollider language.

Recent research in the arts and humanities has emphasized considerations of accessibility and inclusiveness in connection with digital media programming languages. Sonic Pi facilitates engagement among students using live coding, and is built upon the educationally targeted Raspberry Pi boards. The Processing environment has been rapidly adopted by a wide community of artists, designers and media arts educators. The ixi lang live coding environment was designed for simplicity, and represents events and patterns spatially, “thus merging musical code and musical scores” (Magnusson). Both ixi lang and the recent Overtone language are built on the SuperCollider language. Laptop orchestras, such as the Princeton Laptop Orchestra, are excellent environments for research into new audio languages, and the McMaster Cybernetic Orchestra is a large ensemble that features live coding as its main activity.

In this presentation, the author provides an overview of the comparative analysis of audio programming examples, in relation to Thomas Green’s 14 dimensions. She discusses the design considerations for her new language, and then demonstrates the new language she has created with a brief live coding performance.

Tanya Goncalves is an artist-researcher focusing on audio programming, live coding and electroacoustic composition. Her work explores a curiosity for sound and the complex relationships between computer programming and the development of musical composition. She is currently an undergraduate student at McMaster University (Hamilton ON), where she studies Multimedia and Communications. She works closely with McMaster University’s Cybernetic Orchestra, a laptop orchestra that specializes in live coding in order to produce intricate improvisations and compositions. Goncalves is also a co-founder and the VP of communications at MacGDA (The McMaster Game Development Association), a student-run organization dedicated to creating video games and teaching others about video game design. A regular at Hamilton’s Art Crawl, her most recent presentation was The Conditioning, an 8-channel acousmatic composition.

Qualitative Quantity: A phenomenological investigation into the compositional potential of audification
by Joshua Horsley

This paper considers the sonification of corporeal spatial occupancy from a phenomenological perspective, in order to engage with the status of sonification within musical praxis. Audification, sonification, musification, auditory display: the audio realisation of non-audio data is itself phenomenological. It is, as Heidegger stated in 1926, “to let that which shows itself be seen from itself in the very way in which it shows itself from itself,” and it is precisely when the phenomenon is “uncovered” that questions begin to form regarding the status of the sonification process within creative arts practice.

Rigour of method within a sonification facilitates a realisation that is prescript: method as composition, instruction as notation and adherence as realisation. Whilst finding congruence with algorithmic composition, sonification is isolated within musical praxis. Not only is the internality of the subsequently realised sound or music absent; the realised sound or music as perceivable externality can be described as functional insofar as it contains data, in which case the qualities of a sonification are arguably “cheesecake”. Qualitative compositional decisions — for example, wherein sound or music that exists first as heard within internal consciousness (as with Husserlian phantasy) is brought forth as perceivable externality, or wherein sonic intention (as opposed to the internal heard) is met through definition and completion of an algorithmic construct — are discounted in favour of integrity of process toward a quantifiable outcome. This is to say that sonification is an Objective creative praxis and that the success of a sonification is quantifiable: it is measured against representational accuracy. As a music practitioner, however, I seek to embed my practice within the quantifiable accuracy of sonification only to subsequently liberate qualitative compositional decisions. This paper does not intend to conclude that sonification is quantifiable, as applied science; rather, through practice-informed phenomenological investigation, it seeks out the qualities within sonification practice.

As a consequence of investigative sonification practice, temporality is given specific address, and the interrogation of compositional miniatures is referenced so as to critically and reflectively evaluate temporality as of the essence of music’s inherent qualitative attributes, whilst simultaneously considering the substantial difference in existential and temporal status between audio and corporeal spatial occupancy in terms of their homogeneity. Subsequently, the paper indicates a requirement for further study of sonification within creative music practice, and speculates as to the effect upon composition of orienting sonified objects within the same spatial environment, thus: the sonification of occupant corporeal spatial occupancies or, phrased simply, the compositional potential of multiple sonification.

Joshua Horsley is an artist / composer from England. In congruence with his current doctoral pursuits, his primary creative outputs concern the philosophical investigation of temporality within composition, with additional interests in the musical address of heterarchical collaboration, object reality and subjective realities. Joshua is an Associate Lecturer at the University of Central Lancashire (UK) and his work is disseminated internationally.
http://www.joshhorsley.co.uk

Studies in Sound and Vibration
by VibraFusionLab (David Bobier)

VibraFusionLab is an innovative London, Ontario project that provides opportunities for the creation and presentation of multi-sensory artistic practice and partners with other arts-related organizations in achieving this. VibraFusionLab is an interactive creative media studio that promotes and encourages the creation of new accessible art forms, including the vibrotactile.

This new and almost completely unexplored art form comprises compositions and artistic practices designed to be experienced as tactile stimuli rather than as sound. It is accessible to hearing, deaf and hard-of-hearing artists and audiences alike. VibraFusionLab represents considerable potential for generating artistic development and innovative research into vibration and the tactile as an artistic modality. Our holistic approach considers vibration as an art form and as a means of creating, exploring, developing and integrating greater multi-sensory components and experiences into various art disciplines. It furthers our desire to combine alternative language, communication and emotional strategies and experiences in artistic practice. In addition, this kind of interactive, multimedia, multisensory approach allows those with different abilities to enjoy equal participation, and it has the potential to make various forms of artistic expression more accessible. It offers new and unique approaches for those with differing abilities to produce art using an innovative tactile mode, and it opens alternative opportunities, through artistic expression, to explore the integration of sound and vibration with traditions of physical and emotional healing in various cultures.

VibraFusionLab is partnered with the Inclusive Media and Design Centre at Ryerson University and with Tactile Audio Displays (TADs Inc.). The development of VibraFusionLab is funded through the Social Sciences and Humanities Research Council, the Canada Council for the Arts / GRAND NCE, the Ontario Arts Council and the London Arts Council.

David Bobier holds an MFA from the University of Windsor and a BFA from the Nova Scotia College of Art and Design. His work has been exhibited in Canada, the United States and England, and has been the focus of prominent touring exhibitions in Ontario and the Atlantic provinces. Bobier has received grants from the Canada Council for the Arts, the Social Sciences and Humanities Research Council, the GRAND NCE, the Ontario Arts Council and the New Brunswick Arts Council. He is partnering with the Inclusive Media and Design Centre at Ryerson University in exploring vibrotactile technology to create vibratory “compositions” and to investigate broader applications of the sensory interpretation and emotionality of sound and vibration in art making. Bobier is the founder and director of VibraFusionLab, which emphasizes a holistic approach to vibration as a language of creation and exploration; founder and chair of the London Ontario Media Arts Association; Director of Development for the Toronto International Deaf Film and Arts Festival; and a board member of Media Arts Network Ontario.
http://www.davidbobier.com

14:30–16:00 • Keynote Lecture

Venue: Wychwood Theatre
Chair: Darren Copeland

What Matters? Make the Music!
by Pauline Oliveros

This keynote address offers a survey of changes in technology reflected in my work, beginning with Time Perspectives (1961) and continuing through The Mystery Beyond Matter (2014). All my work involves performance in real time, whether presented as fixed media or in live performance, and whether the source is acoustic or electronic sound or acoustic sound processed electronically.

What matters is the making of the music rather than the category.

Pauline Oliveros is a senior figure in contemporary American music. Her career spans fifty years of boundary-dissolving music making. In the ’50s she was part of a circle of iconoclastic composers, artists and poets gathered together in San Francisco. Recently awarded the 2012 John Cage Award from the Foundation for Contemporary Arts, Oliveros is Distinguished Research Professor of Music at Rensselaer Polytechnic Institute (Troy NY) and Darius Milhaud Artist-in-Residence at Mills College. Oliveros has been as interested in finding new sounds as in finding new uses for old ones — her primary instrument is the accordion, an unexpected visitor perhaps to the musical cutting edge, but one which she approaches in much the same way that a Zen musician might approach the Japanese shakuhachi. Pauline Oliveros’ life as a composer, performer and humanitarian is about opening her own and others’ sensibilities to the universe and facets of sounds. Since the 1960s she has influenced American music profoundly through her work with improvisation, meditation, electronic music, myth and ritual. Pauline Oliveros is the founder of Deep Listening, which comes from her childhood fascination with sounds and from her works in concert music with composition, improvisation and electroacoustics. Oliveros describes Deep Listening as a way of listening in every possible way to everything possible to hear, no matter what you are doing. Such intense listening includes the sounds of daily life, of nature, of one’s own thoughts, as well as musical sounds. “Deep Listening is my life practice,” she explains, simply. Oliveros is the founder of the Deep Listening Institute, formerly the Pauline Oliveros Foundation.
http://paulineoliveros.us

Day 4 — Saturday 16 August


09:30–11:00 • Session #5: Approaches to Spectralism and Visuals

Venue: Wychwood Theatre
Chair: Shawn Pinchbeck

Spectralism and Microsound
by David Litke

In his book Microsound, Curtis Roads examines the acoustic and psychoacoustic properties of collections of discrete sonic events, with a particular focus on rapid series of short “sound grains”. While his study is primarily concerned with the electroacoustic technique of granular synthesis, Roads includes physical phenomena in his discussion as well: natural textures such as the sounds of rain and fire may be understood from a microsonic perspective, as may instrumental techniques such as tremolo, flutter-tongue, and contemporary approaches to sound-mass and stochastic composition. For composers interested in combining acoustic instruments with electroacoustics, the connections that Roads makes are of particular interest; although the capabilities of acoustic instruments and electroacoustic synthesis differ, approaching granular synthesis and instrumental composition from a common perspective suggests new possibilities for combining the two practices.

Many of the issues surrounding microsound bear similarities with those engaged by spectral composers. Particularly in the technique of instrumental re-synthesis (whereby the partials of a source spectrum are reproduced by instrumental pitches), spectral music deals with issues of perceptual boundaries concerning cohesion versus separation, as well as the effects of the individual components’ characteristics on the global effect. These dynamics are mirrored in granular synthesis, where the properties of individual grains and the parameters of their combination interact to produce Gestalts that are more or less cohesive and exhibit a range of emergent spectral properties. The aim of this paper is to examine the intersection of these three techniques: electroacoustic granular synthesis, instrumental microsonic textures and spectrally based composition.

A discussion of some of the acoustic and psychoacoustic aspects of both microsound and spectralism provides a point of departure. By modelling granular synthesis or instrumental textures on spectral analysis data, dynamics inherent in each field may have significant effects on the others. Having established the theoretical issues at play, the paper proceeds to demonstrate practical applications of these concepts in the author’s compositional work. Example patches in OpenMusic, Max and SuperCollider elucidate pre-compositional processes; these tools facilitate the interaction among phases of spectral analysis, granular synthesis and stochastic texture generation.
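
To make the interaction concrete, here is a minimal granular sketch in JavaScript with the Web Audio API (the author’s actual tools are OpenMusic, Max and SuperCollider patches, and the partial data below is invented for the example). Each grain is a short enveloped sine tone drawn stochastically from a set of analysed partials, so that the texture’s emergent spectrum approximates the analysed one.

```javascript
// Granular texture driven by spectral analysis data: a stochastic
// stream of short sine grains whose frequencies are drawn from a set
// of (invented) partials. Illustrative only.
const ctx = new AudioContext();

const partials = [
  { freq: 220, amp: 0.5 },
  { freq: 447, amp: 0.3 },  // slightly inharmonic upper partials
  { freq: 689, amp: 0.2 }
];

// Emit one grain: a sine tone with a quick attack/decay envelope.
function grain(freq, amp, when, dur = 0.05) {
  const osc = ctx.createOscillator();
  const env = ctx.createGain();
  osc.frequency.value = freq;
  env.gain.setValueAtTime(0, when);
  env.gain.linearRampToValueAtTime(amp, when + dur * 0.2);
  env.gain.linearRampToValueAtTime(0, when + dur);
  osc.connect(env);
  env.connect(ctx.destination);
  osc.start(when);
  osc.stop(when + dur);
}

// Scatter grains with random inter-onset intervals; louder grains on
// stronger partials push the emergent spectrum toward the source.
let t = ctx.currentTime;
for (let i = 0; i < 400; i++) {
  const p = partials[Math.floor(Math.random() * partials.length)];
  grain(p.freq, p.amp * 0.2, t);
  t += Math.random() * 0.03;
}
```

Varying the grain duration, density and amplitude weighting moves the result along the continuum the paper describes, from a fused spectral Gestalt to a cloud of separately perceived events.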

David Litke holds degrees in composition from the University of Toronto and the University of British Columbia, having completed doctoral studies at the latter under the supervision of Dr. Keith Hamel. Since completing graduate studies in 2008, he has taught courses in electroacoustic music and music theory at UBC and the University of Windsor. His music has been performed by many of Canada’s finest musicians, including the National Broadcast Orchestra, l’Ensemble Contemporain de Montréal and the Turning Point Ensemble. His work has been recognized nationally and internationally, in composition competitions (NBO, SOCAN and CUMS competitions) as well as in emerging composers’ programs (ECM’s Génération 2006, NAC Young Composers 2008, Bozzini Quartet’s Composer’s Kitchen 2010, acanthes@ircam New Technologies 2012, Composit 2013). He has also been active in electroacoustic music research, and has presented at ICMC 2007, SMC 2007 and TES 2013 conferences on gestural control, score-following and spectral music.
http://davidlitke.net

Three Electroacoustic Artworks Exposing Digital Networks
by Aaron Hutchinson

Powerful, inexpensive computers are now common, and artists are taking advantage of æsthetic developments that only the computer can facilitate. Technology and music are integrated in our contemporary moment on fundamental levels of creation, performance, documentation and communication, an integration that encourages algorithmic composition, live coding and networked music. The author discusses three interactive, immersive electroacoustic artworks that broadly investigate the nature of ubiquitous digital networks and, specifically, the functional and æsthetic aspects of digital network technologies that can be exploited by contemporary media artists.

Multinodal is an environment that promotes community among Hamilton’s galleries and art patrons, collapsing physical and psychological barriers to experiencing the city. Data collected via a console is transmitted over an Internet protocol network to support a conversational, telepresent audience in the Factory Media Centre (FMC) and Hamilton Audio Visual Node (HAVN). Monitored in HAVN and the FMC, an algorithmic, audiovisual composition unfolds according to preconceived constraints combined with inputs from the complementary space. Live-streamed camera and microphone feeds are processed and mixed into both rooms to construct additional telepresence.

Revolution 2 is part of an ongoing investigation into the musical possibilities of multiple laptops arranged in a ring formation. The work continues from Revolution, my highly physically choreographed composition for laptop orchestra. Rather than moving people, Revolution 2 exploits the nature of networks (specifically Ethernet Local Area Networks), making apparent and exaggerating network delays in a “telephone game” for laptop orchestra. Each player in the game can launch simple synthesizer programs, which travel counterclockwise via Ethernet connection and sound on all the other players’ laptops. The network transmission necessitates buffering of data, and thus a delay that can be operationalized as a creative musical device. The sonic result is a collaborative, spatialized performance that references, yet falls apart around, a strongly timed beat. Acoustic musicians rely on anticipation to achieve strongly timed performances; laptop musicians must build such anticipation into software to deal with network delays.
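
The mechanism is easy to picture as code. Below is a hedged sketch of one node in such a ring, written in JavaScript for Node.js; the address, port, message format and the playSynth stub are all assumptions, not the piece’s actual implementation.

```javascript
// One node in the ring-topology "telephone game": receive a
// synth-trigger message, sound it locally, then forward it to the
// next machine in the ring.
const dgram = require("dgram");

const NEXT_HOST = "192.168.1.12";  // neighbour in the ring (assumption)
const PORT = 9000;                 // shared port (assumption)

const sock = dgram.createSocket("udp4");

sock.on("message", (buf) => {
  const msg = JSON.parse(buf.toString()); // e.g. { synth: "blip", pitch: 60 }
  playSynth(msg);                         // sound locally

  // Forward after a beat; network buffering adds further delay, which
  // the piece exaggerates as a musical device.
  setTimeout(() => sock.send(buf, PORT, NEXT_HOST), 500);
});

sock.bind(PORT);

function playSynth(msg) {
  console.log("playing", msg);  // stand-in for actual synthesis
}
```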

DACADC exposes the inevitably analogue nature of digital networks. The interactive installation places two laptops on opposite walls of HAVN, each connected to its own microcontroller. The microcontrollers are connected to each other via digital pins, and one laptop performs binary transmission to the other terminal. The hook-up wires are large and apparent in the room, travelling across the floor to a plinth station, and further across the floor to the receiving laptop. Participants are invited to interact with the plinth station, which houses assorted analogue circuitry to route, drop and manipulate the transmission. The transformed networked communication controls a 4-channel SuperCollider musical composition, guided over the course of three hours by slowly changing, pre-programmed algorithms. The audience is invited to intervene physically in the digital communication process, exposing the analogue nature of digital networks.

Aaron Hutchinson is a musician and sound artist from Hamilton, Ontario. Aaron currently creates music as Hut the Believer, Eschaton, Haolin Munk, and as a member of the Cybernetic Orchestra. These ensembles have taken his work to Karlsruhe, Montréal, Toronto, Ottawa, Kingston, Guelph, Peterborough and Hamilton. Aaron has performed with the Toronto Symphony Youth Orchestra, Hamilton All-Star Jazz Band, Redeemer Sinfonia and Hamilton Philharmonic Youth Orchestra, where he serves on the Board of Directors. He won the 2012 Hamilton Arts Award for emerging artist in New Media, and is a founding member of Hamilton Audio Visual Node.
http://soundcloud.com/hutthebeliever

Visual Suspension and Audiovisual Acousmatics: Noise and silence in visual music
by Joseph Hyde

Exploring the application of acousmatic thinking to (audio)visual media, the author in particular takes the idea of reduced listening as a starting point, and proposes a related sub-current through 19th-/20th-/21st-century visual culture in which creative expression is divorced from the depiction of something seen. This endeavour can be traced back to the theories (if not necessarily the practice) of the cubist movement, but lies in particular at the core of the great 20th-century project of abstraction in fine art and (to a lesser extent) cinema. Furthermore, it is central to the early evolution of visual music through the works of Klee, Kandinsky, Richter, Eggeling and Fischinger.

Focused examples of purposeful visual disassociation in 20th-century abstract art / cinema and visual music are outlined before a discussion of key 21st-century visual music works. The author concludes with a discussion of the challenges of achieving the proposed state of “visual suspension” in the context of a highly visual culture. Visual “noise” and “silence” are discussed as powerful devices for achieving visual disassociation. These phenomena are discussed in the context of works by artists such as Jackson Pollock, Stan Brakhage and Thomas Wilfred, and of the author’s own works, Zoetrope, End Transmission and Vanishing Point.

Joseph Hyde’s background is as a musician and composer. He worked in various areas before settling, in the late 90s and after a period working with BEAST in Birmingham, on electroacoustic music, with or without live instruments. Whilst music and sound remain at the core of his practice, collaboration has since become a key concern, particularly in the field of dance, where he works both as a composer and with video, interactive systems and telepresence. His solo work has broadened in scope to incorporate these elements; he has made several audiovisual “visual music” works and has written about the field. Hyde also works as a lecturer and academic, as Professor of Music at Bath Spa University (UK), where he teaches in the BA Creative Music Technology, runs the MMus in Creative Sound and Media Technology and supervises a number of PhD students. Since 2009 he has run Seeing Sound, a symposium on visual music at the university.
http://josephhyde.co.uk

11:15–12:45 • Session #6: Lecture-Recitals

Venue: Wychwood Theatre
Chair: Adam Tindale

[Lecture-Recital] “In Flight” and Audio Spray Gun: Generative composition of large sound-groups
by Richard Garrett

In Flight (2014) is a fixed-media composition for 8-channel surround sound. It was composed in its entirety using groups of sound events generated by Audio Spray Gun, a programme written in SuperCollider.

Audio Spray Gun uses a generative process to create large collections of sound events derived from a single sample. In this process, each sound event is treated as a point in a four-dimensional space whose axes are inter-onset interval, sample playback rate, azimuth and distance from the listener. Events within this space are generated by using simple rules to constrain otherwise random choices within the bounds of a given locus. The system uses a graphical interface to define a “path”, which controls the transformation of this locus over time by means of expansion, contraction and translation within the parameter space. When the process is executed, a sequence of events is generated along this path and then converted into multi-channel audio for recording and insertion into compositions.

If the same process is run a number of times, the characteristics of its constituent events may be highly randomised but the gross features of the group will be strictly defined. Thus, the results will be near identical at the meso-time scale but will differ at the level of individual sound events.

Sound groups generated in this fashion can be made up of several hundred events running over periods of a few seconds to a few minutes. While each event is stationary and distinct, the overall effect of so many overlapping sounds can produce dramatic apparent motion around the sound space. In Flight uses predominantly noise-based sounds to investigate these processes.
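
The scheme can be illustrated with a short sketch (JavaScript here for self-containedness; Audio Spray Gun itself is written in SuperCollider, and all names below are invented). A locus is interpolated along a two-keyframe path, and each event is a constrained random point within it.

```javascript
// Generate sound events as points in a 4-D parameter space, constrained
// to a locus that expands/contracts/translates along a "path".
// Axes: ioi (inter-onset interval, s), rate (playback rate),
// azimuth (radians) and distance from the listener.
const path = [
  { centre: { ioi: 0.10, rate: 1.0, azimuth: 0.0,         distance: 1.0 },
    spread: { ioi: 0.05, rate: 0.5, azimuth: Math.PI,     distance: 0.5 } },
  { centre: { ioi: 0.02, rate: 2.0, azimuth: Math.PI / 2, distance: 2.0 },
    spread: { ioi: 0.01, rate: 0.2, azimuth: 0.3,         distance: 0.2 } }
];

const lerp = (a, b, t) => a + (b - a) * t;

// n events along the path: the locus is interpolated between keyframes
// and each parameter is a random choice within the locus bounds.
function spray(n) {
  const events = [];
  let onset = 0;
  for (let i = 0; i < n; i++) {
    const t = n > 1 ? i / (n - 1) : 0;
    const ev = { onset };
    for (const k of ["ioi", "rate", "azimuth", "distance"]) {
      const c = lerp(path[0].centre[k], path[1].centre[k], t);
      const s = lerp(path[0].spread[k], path[1].spread[k], t);
      ev[k] = c + (Math.random() * 2 - 1) * s;   // constrained random
    }
    onset += Math.max(0.001, ev.ioi);
    events.push(ev);
  }
  return events;
}

console.log(spray(500).slice(0, 3));  // inspect the first few events
```

Two runs of spray(500) share the same gross trajectory at the meso-time scale while differing in every individual event, which is exactly the property described above.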

This lecture-recital will consist of a performance of In Flight and a demonstration of how some of its components were created using Audio Spray Gun.

Richard Garrett is a composer who specialises in the use of fuzzy logic for algorithmic composition, audio processing and manipulation. His generative music has been exhibited at the Ars Electronica festival (Linz, Austria) and his installation Weathersongs, which generates electronic music in real time from the weather, has been presented in both Wales and Italy. He uses the Max and SuperCollider programming languages extensively in his composition and has written a suite of software modules called nwdlbots (pronounced “noodle-bots”) for generative composition within Ableton Live. Richard studied algorithmic computer music with David Cope and Peter Elsea in Santa Cruz (California) and completed a Master’s degree in Composition and Sonic Art at Bangor University (Wales) in 2013. He is currently an AHRC PhD scholar at Bangor, conducting practice-based research into the application of fuzzy logic to composition.
http://www.sundaydance.co.uk

[Lecture-Recital] Resounding but not Sounding: Diffusing into interior spaces of instruments
by James O’Callaghan

The acousmatic listening situation presents a unique de-coupling of sound production and produced sound, of physical gesture and aural gesture. While conventional acousmatic concert diffusion aims to provide a “neutral” and “invisible” reproduction of sound, the re-introduction of sound-producing objects and instruments in a concert setting can afford a fascinating perceptual blurring of sound and source, as well as new possibilities for investigating space and spatialization.

Much of my recent work has involved the diffusion of electroacoustic sound into loudspeakers placed inside of or onto musical instruments. In this lecture-recital I will discuss several recent mixed and acousmatic works and some of the conceptual and compositional issues emerging from them, and perform from among them a pair of linked pieces: Objects-Interiors for diffusion inside a piano, and Bodies-Soundings for diffusion into an acoustic guitar and toy piano.

An initial consideration in the works is that of objecthood and sounds endogenous and exogenous to the objects in question. For instance, most listeners will have a perceptual expectation for what kinds of sounds an instrument can produce, and how it typically functions. I will outline some of the compositional strategies I have applied in navigating these issues: moving between using sounds sourced from or related to the instruments, and “outside” sounds. A major concern is examining to what extent these different approaches affirm, expand or contradict the identity of the instruments.

Another important aspect implicated in these works is the idea of causality — even in the acousmatic works there is a disjunct between the idea of sound production and acoustic result. Though there are no performers and no rational reason to expect the instruments to physically “produce” the sound as such, the emergence of sounds from the instruments may encourage an imagined source bonding and related conflict in perception. In the mixed works I will also discuss in my presentation, this percept is increasingly blurred, as the source of the sound (physically “caused” by the instrument or resounding in the instrument from an electronic source) is always in flux.

Finally, I will address the elements of space and spatialization in these works. In each of them, the sound moves between localisation in the instruments and the larger space of the concert hall. I will discuss some compositional concerns regarding the relationship between the musical material and this larger spatial movement, and considerations of space as an articulation of form in this context. The colour of space also becomes a significant aspect of the musical discourse, as the instruments-as-speakers affect the timbre of the electroacoustic source sounds considerably. I will discuss different approaches I have taken to this consideration, from reinforcing “redundancy” of space (recording sounds inside of instruments and playing them back in the same space, or using impulse responses from the interiors of instruments) to contradicting and expanding the spatial colour of the instruments.
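One common way to realize the “redundancy of space” strategy mentioned above is to convolve a source sound with an impulse response captured inside an instrument before diffusing it back into that instrument. The sketch below, which assumes hypothetical mono input files and is not O’Callaghan’s actual tool chain, shows the core operation.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical file names: a mono source sound and an impulse response
# recorded inside a piano (e.g. by popping a balloon near the strings).
sr, dry = wavfile.read("source_sound.wav")
sr_ir, ir = wavfile.read("piano_interior_ir.wav")
assert sr == sr_ir, "source and IR must share a sample rate"

dry = dry.astype(np.float64)
ir = ir.astype(np.float64)

wet = fftconvolve(dry, ir)           # imprint the interior resonance
wet /= np.abs(wet).max()             # normalise to avoid clipping

wavfile.write("source_in_piano.wav", sr, (wet * 32767).astype(np.int16))
```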

James O’Callaghan is a composer and sound artist based in Montréal. His music intersects acoustic and electroacoustic media, employing field recordings, amplified found objects, computer-assisted transcription of environmental sounds and unique performance conditions. In 2014, his work Isomorphia for orchestra and electronics was nominated for a JUNO Award for Classical Composition of the Year and won the SOCAN Foundation John Weinzweig Grand Prize. In 2013, his electroacoustic work Objects-Interiors won first prize in the Jeu de temps / Times Play awards. Recently, he received a commission from the Groupe de Recherches Musicales for a new acousmatic piece, to be produced in 2015 in Paris and Montréal. He received a Master of Music degree in composition from McGill University in 2014, studying with Philippe Leroux, and a Bachelor of Fine Arts from Simon Fraser University in 2010, studying with Barry Truax, David MacIntyre, Rodney Sharman and Arne Eigenfeldt.
http://www.jamesocallaghan.com

15:30–16:30 • Session #7: Historic Perspectives and Sound Landscapes

Venue: Wychwood Theatre
Chair: Emilie LeBel

Sound Landscape Memory
by Don Hill

Sound never ages. A sonic pitch — the note C, for instance — is immutable, if the conditions that give rise to that pure frequency are maintained; it is an ideal carrier wave for cultural memory. Sound never disappears. Like a river that goes underground, it can pop up in unexpected places. The presenter will describe how ancient landscape architecture is infused with sonic memory and how it plays back over time — speaking directly to the human central nervous system; the recordings he has made in situ of “songs in the land” will serve as a potent demonstration.

Don Hill is a sound artist and broadcaster, as well as an associate researcher at Laurentian University’s Behavioural Neuroscience Laboratory. He is interested in the subjectively transparent and how “it” can objectively be made apparent. For instance, he completed an investigation of the psychoacoustic properties of the carillon bells atop Edmonton’s City Hall and how the specific pitches associated with each chime relate to and affect the acoustics of a large public gathering space in the heart of the downtown core. Hill recently presented new research documenting VLF transmissions arising from the sonic architecture embedded in a 5300-year-old medicine wheel, an alignment of placed stones spread out over 20 square kilometres on the Canadian prairies.

Recreating Robb: The sound of the world’s first electronic organ
by Michael Murphy and Max Cotter

In 1927, Canadian inventor (Frank) Morse Robb of Belleville, Ontario became the first to succeed in developing an electronic organ. His work was groundbreaking, influencing the field of electronic musical instruments in the 1930s and 40s (including Hammond, as well as Canadian synthesizer developer Hugh Le Caine). However, like that of too many other notable Canadians, his contribution to the field is greatly underappreciated. Fewer than 20 instruments were built, and until recently it was believed that no complete organs survived, with only some original workshop prototypes and parts existing at the National Museum of Science and Technology in Ottawa. No phonograph recordings of the instrument are known to exist. At the time of its introduction, newspaper reports and critical response to the organ were very favourable. William Harrison Barnes reported as late as 1937 that, when comparing it to the Hammond Organ, “some of the tones were equal to good (pipe) organ tones, and a better Trumpet quality was obtained than I have heard on any other electronic organ” and “the attack of the tone was much more like that of an organ than either the Hammond or Orgatron.” The authors were intrigued by the possible reasons for the “better tones” and positive reviews, and have endeavoured to investigate the approach used by Robb.

Recently, a Robb Wave Organ was discovered and donated to the National Music Centre (NMC) in Calgary. The authors report on their efforts to research the work of Robb, as well as their work with the technical staff of the NMC to restore the organ in order to record it and create a sample set. All of the stops have been recreated, and these will be demonstrated at the conference. Instead of additive or subtractive synthesis, Robb used an early version of sample-based synthesis, and the stops are sonically very different from those of contemporaneous devices of the 1930s. Robb’s system used rotating discs that attempted to regenerate the entire complex waveform of the sound being recreated; the Wave Organ thus used electro-mechanical means to mimic the sound of a pipe organ. The paper will explain the technology Robb used to “record” his samples of pipe organs, including the photomechanical processes used to transfer sound waves from oscilloscopes onto metal tone wheels, first with a hill-and-dale and later with a lateral-cut system. The authors will also report on Robb’s use of an almost digital system resembling Pulse Code Modulation (PCM), conceived some years before Alec Reeves’ work on PCM at ITT and a decade before the Oliver and Shannon PCM patent.
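To make the tone-wheel principle concrete, the sketch below stores a single cycle of a complex waveform and reads it back at a rate proportional to the desired pitch, much as a pickup reads a spinning disc; the waveform itself is an invented stand-in, not Robb’s recorded pipe-organ data.

```python
import numpy as np
from scipy.io import wavfile

SR = 44100
N = 2048     # samples in one stored cycle (one "revolution" of the disc)

# Invented stand-in for Robb's recorded wave: a fundamental plus a few
# weighted partials, giving a single cycle of a complex waveform.
phase = np.linspace(0, 2 * np.pi, N, endpoint=False)
cycle = sum(a * np.sin(k * phase)
            for k, a in [(1, 1.0), (2, 0.4), (3, 0.25), (5, 0.1)])

def tone(freq, dur):
    """Read the stored cycle at a rate proportional to the desired pitch,
    as a pickup reading a spinning tone wheel would."""
    idx = (np.arange(int(SR * dur)) * freq * N / SR) % N
    return np.interp(idx, np.arange(N), cycle)

note = tone(261.63, 2.0)             # two seconds of middle C
note /= np.abs(note).max()
wavfile.write("tonewheel_c.wav", SR, (note * 32767).astype(np.int16))
```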

Michael Murphy is a professor in the RTA School of Media at Ryerson University and the principal investigator of the AccessFabrik Lab. He is the former director of the Rogers Communication Centre. He is a professor in the York-Ryerson Joint Graduate Programme in Communication and Culture and teaches courses at the graduate and undergraduate level in Advanced Communication Technology, Radio and Audio Production, Advanced Audio Theory, and Broadcasting History. He also supervises graduate student research in the field of Communication, Culture and Media.
http://accessfabrik.rcc.ryerson.ca

Max Cotter is a student at the RTA School of Media, specializing in Audio Production and associated technologies. He is a research assistant in the AccessFabrik Lab, where he has been researching the work of Frank Morse Robb and the development of the Wave Organ.
http://accessfabrik.rcc.ryerson.ca

Day 5 — Sunday 17 August

Click here to return to the symposium schedule.

10:00–12:45 • Session #8: Perspectives on Notation and Live Instruments in EA

Venue: Wychwood Theatre
Chair: Jeffrey Roberts

[Lecture-Recital] Counterpoint, Expansion and Interaction: Approaches to flute and electronics
by Elise Roy

Focusing on the relationship between an acoustic instrument and the myriad possibilities of electronic accompaniment, this lecture-recital will examine three recent works for flute and electroacoustics and the implications of their respective constructions — namely, their use of the contrapuntal, interactive and/or expansive potentials of acoustic and electronic coexistence. The works that will be presented and examined are Kurt Isaacson’s bokeh for flute and audio playback, Elainie Lillios’ Among Fireflies for alto flute and live interactive electroacoustics, and my own work, the dream an iceberg, for flute and live electronics.

Élise Roy has appeared as a flutist, improviser and composer throughout the United States. Her recent electroacoustic works have been selected for performance at Electronic Music Midwest 2013, Electroacoustic Barn Dance 2013, South Carolina State University’s Electroacoustic Concert Series, 2014 inner sOUndscapes, PAS-E in Venice, Italy, the 2014 New York City Electroacoustic Music Festival, and the 2014 Society for Electro-Acoustic Music in the United States (SEAMUS) National Conference. Élise was the runner-up in the national 2014 ASCAP/SEAMUS Student Commission Competition and a finalist in the 2014 ASCAP Morton Gould Young Composer Awards. Her fixed media work bas relief (Flutescape I) appears on the SEAMUS Electroacoustic Miniatures 2013: Negative Space album. Élise is currently studying in the DMA programme in contemporary music at Bowling Green State University. She holds degrees from the Oberlin Conservatory of Music and the California Institute of the Arts (CalArts).
http://www.eliseroy.com

[Lecture-Recital] The Generative Percussionist: 21st-Century ideas and methods for real-time performance and expression
by Todd Campbell

Modern percussionists are confronted with a challenge: innovate or risk being left behind. Thanks to recent developments in generative software such as Ableton Live, and in percussion hardware that allows powerful real-time control and expression, there exist creative and musical opportunities for the percussionist that were heretofore unimaginable. In this lecture-recital, I will explore several ways that one might incorporate the technology of generative music and looping into a live performance using state-of-the-art percussion controllers. I will demonstrate how I integrate generative music and looping into my own live performances and studio recordings by performing selected examples from my most recent release, Versification. This recording project combines generative ideas and concepts with live looping and traditional studio recording; it provides a roadmap for percussionists and others striving to incorporate new forms of expression into their composition and performance repertoire.

At the beginning of the lecture-recital, I will discuss and deconstruct two specific tracks from Versification that weave generative ideas into the fabric of live percussion and loop-oriented material. I will then perform the tracks live to illustrate the possibilities of using generative ideas as a springboard for creativity and expression. I will also present a free improvisation that demonstrates the power and freedom that looping technology can provide when integrated into a live performance; the looping material will be constructed from live audience participation in near real time.

The presentation is distinctive in that percussionists rarely engage computer-mediated generative and live improvisatory tools in their creative work. Furthermore, the visual novelty of seeing a percussionist interact with the performance gear in this way commands attention. Information related to the presentation will be available for participants, who will also be able to improvise over some play-along tracks. The instrumentation will be as follows: I will play electronic percussion via the Alesis DMPro, Ableton Live, Launchpad and the Digitech JamMan; loops will be constructed from live audience participation.
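For readers unfamiliar with the mechanics of live looping, the sketch below reduces the record-then-overdub cycle to simple buffer arithmetic: a fixed-length loop into which successive passes are summed. It is deliberately hardware-agnostic and models neither the JamMan nor Ableton Live specifically.

```python
import numpy as np

class Looper:
    """A fixed-length loop buffer into which successive passes are summed."""

    def __init__(self, loop_len_samples):
        self.buf = np.zeros(loop_len_samples)
        self.pos = 0

    def overdub(self, block):
        """Mix an incoming audio block into the loop, wrapping at the end."""
        idx = (self.pos + np.arange(len(block))) % len(self.buf)
        np.add.at(self.buf, idx, block)     # handles repeated indices safely
        self.pos = (self.pos + len(block)) % len(self.buf)

    def read(self, n_samples):
        """Play back n_samples from the loop, wrapping as needed."""
        idx = (self.pos + np.arange(n_samples)) % len(self.buf)
        return self.buf[idx]

looper = Looper(44100 * 2)                       # a two-second loop
looper.overdub(np.random.randn(88200) * 0.1)     # base pass fills the loop
looper.overdub(np.random.randn(88200) * 0.1)     # second pass layers on top
playback = looper.read(88200)                    # one full layered cycle
```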

Todd Campbell received his training in classical percussion at West Virginia University, studying with Phil Faini and Dave Satterfield. While at WVU in the mid-1990s, he took lessons in electronic music and FM synthesis with Gil Trythall — lessons and experiences that still inform his music today. In 2008, Campbell embarked on a quest to release ten solo album-length recordings in ten years. Six years and six releases later, he is finding an innovative and critically acclaimed compositional voice that combines electroacoustic percussion, guitar, and hardware synths and effects with many years spent as a touring rock drummer. Todd Campbell’s solo performances invite you to experience a predictably unpredictable foray into sonic bricolage, incorporating masterful drumming and avant-whacked synth wizardry that holds a curious treat somewhere in the depths. Currently, Campbell is assistant professor of music at Bloomsburg University and is completing his dissertation.
http://www.bloomu.edu/music-faculty-campbell

When Worlds Collide: Tackling graphic notation in live electronic music
by Christian Martin Fischer

When thinking about notation in live electronic music, “worlds collide” in several ways. First, performers of acoustic instruments and computer musicians (regardless of whether they use a computer or other electronic means) make contrasting demands of notation: regular staff notation is impractical for computer musicians working with concrete or abstract sounds and with spatialization. Moreover, there is as yet no established notation for electronic music; instead, graphics in various manifestations are used. This leads to a second “collision”, as the distinction between prescriptive and descriptive notation is mostly neglected in graphic notations. Third, ever since graphic notation peaked in the avant-garde music of the 1960s, there has been the question of the intended level of improvisation: musical graphics like Earle Brown’s December 1952 act as a sheer trigger for improvisation, while, for example, Anestis Logothetis’ elaborate graphic notation system provides clear and distinct structures and playing instructions. Fourth, two opposing factions argue over whether there is any need for notation in live electronic music at all. And finally, the worlds of performer and audience collide, as the genesis and manipulation of music in the computer remain invisible even to the attentive listener, and the typically enjoyable features of live performances are missing.

In contemporary live electronic music, communication between performers of acoustic instruments and computer musicians is vital for a profound performance practice. Oral or written agreements, stopwatches and score following all have obvious drawbacks. The steadily increasing number of artistic and scientific works on notation in electroacoustic music in recent years illustrates the growing interest in this field. Here too, the computer, with its ability to generate and manipulate graphics and video, offers new opportunities for notation. There are several interesting works and projects utilizing animation and motion graphics for notation purposes. Nevertheless, they approach the problem from one of two main perspectives: either from the acoustic instrument’s side, where staff notation is enhanced with additional symbols indicating when and what the computer musician should do, or from the computer instrument’s side, proposing rather technical solutions such as score following. Again the two worlds collide. A common ground that eliminates the existing gaps has yet to be found.

This paper proposes the Motion Graphic Notation Framework (MGNF) to communicate musical actions, gestures, structure and dramaturgy to all performers and, if desired, to the audience. It is a framework with clear definitions of how to use motion graphics and animations for notation in live electronic music. MGNF tries to overcome the existing gaps by using defined symbolic, associative or instructive graphics. Furthermore, it establishes a common ground for all performers involved while allowing flexibility in setting up the parameters of a musical performance, ranging from improvised structures to clear indications and distinct playing instructions.
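Since the abstract does not detail MGNF’s internals, the sketch below only suggests the general shape such a system might take: a shared, timed cue list distributed to all performers, with each cue tagged by one of the three graphic categories the abstract names. All field names and cue contents are invented.

```python
import time

# Invented cue list: (time in seconds, performer, graphic category, instruction).
# The three categories are the ones named in the abstract.
cues = [
    (0.0,  "flute",    "symbolic",    "sustained multiphonic"),
    (8.0,  "computer", "instructive", "open granular layer, slow density rise"),
    (20.0, "all",      "associative", "water imagery, free response"),
]

start = time.monotonic()
for t, who, kind, text in cues:
    while time.monotonic() - start < t:    # wait until the cue time arrives
        time.sleep(0.01)
    print(f"{t:6.1f}s  ->  {who:8s} [{kind}] {text}")
```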

Christian M. Fischer studied media design at Bauhaus University Weimar and electroacoustic composition at the Musikhochschule “Franz Liszt” in Weimar and at the Estonian Academy of Music and Theatre in Tallinn. Besides several teaching assignments in Weimar, Berlin, Tallinn and Cairo, he has worked as a research associate at the Academy of Applied Sciences and Arts in Schwäbisch Hall and was head of the Media Design Department at the German University in Cairo. His field of work covers hybrid media, audio installations and electroacoustic and live electronic compositions. Currently he is a PhD candidate in the composition department at the Estonian Academy of Music and Theatre, working on Motion Graphic Notation for Live Electronic Music.
http://c-m-fischer.de

[Lecture-Recital] New Music Notation — Score Design, Function and Role: Notation of electroacoustic sound and the problematics of representation
by jef chippewa

The notated score is hardly exclusive to instrumental music practices, yet even today it remains relatively unfamiliar territory for electroacoustic composers, despite the numerous scores that have been created specifically for EA works over some 60 years of modern EA practice. In scoring an electroacoustic part, two major concerns need to be considered: the type of information that is required, and the intended function or role of the score in the presentation of the piece. These concerns in the realm of EA may contrast greatly with those in the instrumental domain.

Whereas score and notation have elsewhere been considered inseparable (i.e. the score as a complete identity), to fully understand the electroacoustic work it is important first to distinguish the different notation types and consider how effectively they can represent different aspects of sound, before considering the score itself — which may be a composite of several notation types, depending on the degree to which the materials need to be defined and on how the score is meant to be used.

With over 15 years of experience in New Music notation, I am unavoidably critical of the shortcomings of the various notation types, but nonetheless a strong proponent of their potential when fully exploited for the representation of sound and music. Each notation type has individual pros and cons that need to be weighed against the intended use of the score. In essence a reductive representation of the piece, every form of notation is afflicted by compromise, but the foundations of EA notation can be extended and improved with a little effort. For example, panning and volume automations can indicate not only the proportion of the original sound used in the piece but also how individual sound files relate to one another; region names can indicate the sections and structure of the piece; display colours can be used to group similar materials. Some materials represented in graphic notation can be made more useful through transcription into traditional notation. And of course it is possible — even desirable — to combine various types of notation in a single score.

All of this is dependent on what the actual intended use of the score will be: who will need to use the score and in which context(s)? A listening score, performance score, mixing / technical log, musicological transcription and an analytical précis all have very different needs in terms of what must be represented, not to mention the degree of accuracy and detail required of the notation.

A deeper understanding of various types of notation of electroacoustic sound provides composers with a broader understanding of the potential of notation in EA, which improves the understanding of the function of their scores and the role that notation and score play in representing their creative work. For the casual listener, these reflections encourage an understanding of the different ways of reading the score and more thorough insight into the musical processes in the pieces they “represent.”
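As a rough illustration of the composite-score idea (and not chippewa’s own format), session data of the kind described above, that is, named and coloured regions plus automation breakpoints, can be modelled as a small data structure from which either a listening score or a technical log could be rendered.

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    name: str        # section label, e.g. "A1: granular swell"
    start: float     # seconds
    end: float
    colour: str      # groups similar materials visually
    pan_automation: list[tuple[float, float]] = field(default_factory=list)
    gain_automation: list[tuple[float, float]] = field(default_factory=list)

score = [
    Region("intro: water drops", 0.0, 14.5, "blue",
           pan_automation=[(0.0, -1.0), (14.5, 1.0)]),
    Region("A1: granular swell", 14.5, 62.0, "orange",
           gain_automation=[(14.5, -24.0), (40.0, -6.0)]),
]

# A listening score needs only names and times; a technical log would
# also print the automation breakpoints.
for r in sorted(score, key=lambda r: r.start):
    print(f"{r.start:7.1f}-{r.end:7.1f}  [{r.colour}]  {r.name}")
```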

jef chippewa is a composer, specialist in the notation of new music, arts administrator, project manager and a fantastic cook. In recent works, he has explored the potential of the miniature as a way of problematizing musical form, notably with 17 miniatures (flute, extended piano, drumset, several dozen sound-producing objects) and in his footscapes (electroacoustic) and postcards (toy piano, sound objects) series. His compositions have been performed in Ai-Maako, Darmstadt Ferienkurse, FUTURA, Inventionen, ISCM, MANTIS and Visiones Sonores. Ensembles such as LUX:NM, Trio Nexus and ensemble recherche have commissioned and/or performed his compositions. In 1999, jef chippewa founded shirling & neueweise, a company specialised in New Music notation. Since 2010 he has been developing a module-based seminar, “New Music Notation: Score Design, Function and Role,” that he has given in various forms in North America and Europe. jef chippewa is Administrative co-Director of the Canadian Electroacoustic Community and Coordinating Editor for the CEC’s quarterly electronic journal, eContact!
http://newmusicnotation.com/chippewa

14:30–16:00 • Special Session: A Noisome Pestilence — An Afternoon of Hugh Le Caine

Venue: Wychwood Theatre
Moderator: Gayle Young
Panel: Kevin Austin, Norma Beecroft, David Jaeger, Jim Montgomery, Pauline Oliveros and Paul Pedersen.

“I would say that I was a worker in the vineyard, and it was a tremendously exciting vineyard. I don’t regret a microsecond of it.”
— Hugh Le Caine in an interview with Alan Gillmor and William Wallace (quoted in Gayle Young, The Sackbut Blues: Hugh Le Caine, Pioneer in Electronic Music)

Hugh Le Caine (1914–1977) remains a relatively obscure figure in the musical world, but his name is synonymous with the formative era of electronic music studios in the 1950s and 1960s, and his impact as a composer and inventor is still apparent today. His self-described “demonstrations” — most notably Dripsody, created using his multi-track tape machine and the recording of a single drop of water — are required listening and are considered among the seminal electronic pieces of the 20th century. His contribution to the advancement of the synthesizer, which includes inventing the first touch-sensitive and polyphonic synthesizers, cannot be overstated.

Prodigious in his exploration of science and music from a young age, Le Caine devoted his professional life to the application of his knowledge of physics to the invention of electronic instruments and interfaces, tailoring them to the creative intuition of a musician or composer. Many of his achievements were attained in his personal electronic music studio and under the auspices of the electronic music laboratory established in 1954 at the National Research Council of Canada (NRC). In an era of dynamic investment in the arts, sciences and industry in Canada, Le Caine was able to realize his dream of “beautiful sounds”. Our gathering today is intended to celebrate Le Caine’s passion, inventiveness and creativity — attributes that made him such an important figure.

As Le Caine is increasingly consigned to discussions of musical history, we feel this is a vital conversation, one that illuminates the various threads connecting him to music-making today. Furthermore, the state-sponsored investment in infrastructure that enabled Le Caine’s artistry has been reduced, and few people believe such a project would be supported today. In today’s very different climate, little emphasis has been placed on the historical achievements of institutions like the NRC and, as a consequence, institutional memory and documentation have suffered. In recognizing Le Caine’s achievements and marking his centenary, we want to instil a strong memory of an era of Canadian music in which Le Caine was a pivotal figure.

Our conversation this afternoon centres on two generations of electronic music practitioners. Representing the 1950s and 1960s, Paul Pedersen and Norma Beecroft were immersed in Le Caine’s sound world, having worked directly with Le Caine instruments. Their discussion will convey something of the process, experience and inspiration involved in early electronic music studios. Beecroft spent many years documenting the history of electronic composition, and she will elaborate on the relationship between Le Caine and Gustav Ciamaga, coordinator of the University of Toronto Electronic Music Studio (beginning in 1965). Paul Pedersen was one of the first students to work in UTEMS, but he was also the instigator of, and Le Caine’s collaborator in, the development of the first polyphonic synthesizer.

Also active in this era was TIES special guest and Keynote Speaker Pauline Oliveros. In 1966, Oliveros was among a group of graduate students who participated in a summer course at UTEMS conducted by Le Caine and Ciamaga. She will share her impressions of Le Caine’s instruments while also discussing her role in the San Francisco Tape Music Centre and the tape centre at Mills College, where she was the director.

The second generation is represented by composition students of the 1970s, who experienced the electronic music studio at a time when commercially available equipment and techniques began to replace Le Caine’s inimitable, individually crafted prototype instruments.

David Jaeger and James Montgomery are both founding members of the Canadian Electronic Ensemble, established in 1971 in Toronto. In the 70s, the group explored live performance with polyphonic synthesizers, and they continue to perform to this day, but their halcyon days were spent at UTEMS. Their application of synthesizers in a live ensemble setting gives them unique perspectives from which to discuss the achievements of Le Caine’s designs and the adjacent developments in electronic music equipment.

Composer and instrument designer Gayle Young studied at York University in the 70s, working in an adventurous electronic music environment coordinated by figures like David Rosenboom, Richard Teitelbaum and James Tenney. Her interest in instrument design and electronics naturally led her to Le Caine, and she has been steadfast in documenting this aspect of Canadian music history and culture.

By the early 70s, the University of Toronto was part of a growing network of campuses expanding their electronic music facilities. Composer Kevin Austin joins us to comment on his arrival at the McGill Electronic Music Studio in Montréal in this period. Established by composer — and enthusiastic Le Caine supporter — István Anhalt, the McGill EMS was in a stage of transition away from Le Caine instruments, and Austin will add to the dialogue regarding Le Caine’s impact.

Gayle Young composes music for electroacoustics (often including soundscape), for orchestral instruments, and for instruments she designed and built in order to work with unorthodox tunings. She was a consulting composer with the Structured Sound Synthesis Project (1979–82), a graphic-interface computer music system pioneered by Bill Buxton at the University of Toronto. Her compositions have been broadcast and performed internationally. She will be a fellow of the Civitella Ranieri Foundation in Fall 2014. As publisher and former editor of Musicworks Magazine, Young has facilitated the discussion of work by many innovative composers, musicians and sound artists, and has published many articles on aspects of innovation in music. The Sackbut Blues, her biography of electronic music pioneer Hugh Le Caine, outlines a fertile period of interaction among science, technology and music in the mid-twentieth century. Young also produced a CD of Le Caine’s compositions and instrument demonstrations.
http://www.gayleyoung.net

Kevin Austin is a Montréal-based composer, educator, theorist and arts animator. Active in EA since 1969, he is a Charter and Founding Member of the Canadian Electroacoustic Community (CEC). He met and worked briefly with Hugh Le Caine in 1969–70 while a student at McGill, studying electronic music with István Anhalt and Paul Pedersen. After many years of work with live electronics, fixed media, acoustic and mixed pieces, he now focuses on point-source multi-channel compositions, mixed works with a small Chinese instrument ensemble, and occasional pieces for virtual ensembles, from duos to four “orchestras” in multi-channel format. A number of these are found on <sonus.ca>. In the early 2000s he saw the initial realization of a curriculum for Electroacoustic Studies at Concordia University.

Norma Beecroft is part of a generation of pioneering professional composers who firmly established Canada’s place on the world’s musical map. An award-winning composer renowned for her use of electronic sound, Beecroft has been commissioned by many of Canada’s leading artists, ensembles and organizations. She has also enjoyed a long career in broadcasting, in television and as a radio producer, commentator and documentarist for the CBC and CJRT-FM radio. Many of Beecroft’s compositions combine electronically produced or altered sounds with live instruments, the electronic music serving as an extension of vocal and/or instrumental sounds. Owing to her intense interest in technology in music, in the late 1970s and early 1980s she interviewed many of the world’s leading composers who were among the first to use the electronic technology of their day in their music. This extensive research, under the tentative title Music and Technology, documents a new period of musical history beginning primarily after World War II, and Beecroft draws some of her comments in this presentation from it. For her service to Canadian music, Norma Beecroft was awarded a Doctor of Letters, honoris causa, from York University in Toronto in 1996. She is an honorary member of the Canadian Electroacoustic Community.
http://www.musiccentre.ca/node/37277

David Jaeger is a music producer, composer and broadcaster who was a member of the CBC Radio Music department from 1973 to 2013. In 1978, he created the radio show Two New Hours, which was heard on the national CBC Radio Two network until Spring 2007. In the early 1970s, Jaeger established a digital sound synthesis facility at the University of Toronto, one of the first in Canada. During this time, while working at the University of Toronto Electronic Music Studio he met and became a colleague of Hugh Le Caine. In 1971, together with David Grimes, Larry Lake and Jim Montgomery, he founded the Canadian Electronic Ensemble (CEE). From 1974 to 2002 he served as the CBC Radio coordinator of the CBC/Radio-Canada National Radio Competition for Young Composers. In 2002 David Jaeger was elected President of the International Rostrum of Composers and was the first non-European ever to be named to this post in the 55-year history of that organization.
http://www.canadianelectronicensemble.com

Jim Montgomery has been involved with electroacoustic music since 1970 when he came to the University of Toronto as a graduate student, where he studied composition with Gustav Ciamaga and John Weinzweig. He is a founding member of the Canadian Electronic Ensemble (CEE), the world’s longest-lived electroacoustic group. He has composed many works combining acoustic and electroacoustic instruments and has developed several new procedures for collective composition and structured improvisation. The culmination of this series so far was Megajam (1992), which involved twenty live-electronic performers. In his parallel career as an arts administrator, Jim Montgomery served as Managing Director of the Canadian Electronic Ensemble from 1976–83 and Administrative Director of New Music Concerts from 1984–87. From 1987–2005, he was Artistic Director of the Music Gallery. He is a past president of the Canadian League of Composers and has served as a lecturer in the Faculty of Education of the University of Toronto (Electronic Media).
http://www.canadianelectronicensemble.com

Pauline Oliveros is a senior figure in contemporary American music. Her career spans fifty years of boundary-dissolving music making. In the ’50s she was part of a circle of iconoclastic composers, artists and poets gathered together in San Francisco. Awarded the 2012 John Cage Award from the Foundation for Contemporary Arts, Oliveros is Distinguished Research Professor of Music at Rensselaer Polytechnic Institute (Troy NY) and Darius Milhaud Artist-in-Residence at Mills College. Oliveros has been as interested in finding new sounds as in finding new uses for old ones — her primary instrument is the accordion, an unexpected visitor perhaps to the musical cutting edge, but one which she approaches in much the same way that a Zen musician might approach the Japanese shakuhachi. Pauline Oliveros’ life as a composer, performer and humanitarian is about opening her own and others’ sensibilities to the universe and facets of sounds. Since the 1960s she has influenced American music profoundly through her work with improvisation, meditation, electronic music, myth and ritual. Pauline Oliveros is the founder of Deep Listening, which comes from her childhood fascination with sounds and from her work in concert music with composition, improvisation and electroacoustics. Oliveros describes Deep Listening as a way of listening in every possible way to everything possible to hear, no matter what you are doing. Such intense listening includes the sounds of daily life, of nature, and of one’s own thoughts, as well as musical sounds. “Deep Listening is my life practice,” she explains, simply. Oliveros is the founder of the Deep Listening Institute, formerly the Pauline Oliveros Foundation.
http://paulineoliveros.us

Paul Pedersen is a Canadian composer, arts administrator and music educator. An associate of the Canadian Music Centre and a member of the Canadian League of Composers, he is particularly known for his works of electronic music, a number of which utilize various forms of multimedia. In 1961 Pedersen joined the music staff at Parkdale Collegiate Institute in Toronto. He left there in 1962 when he was appointed music director of Augustana University College, a post he held through 1964. In 1966 he was appointed to the music faculty at McGill University where he remained for the next 24 years. He served as the chairman of McGill’s theory department from 1970–74 and was head of the school’s electronic music studio from 1971–74. He served as Associate Dean of the music school from 1974–76 and then was Dean of the school from 1976–86. He was also director and executive producer of McGill University Records from 1976–90. In 1990 Pedersen left McGill to become the Dean of the music school at the University of Toronto.
http://www.musiccentre.ca/node/37322
