Toronto Electroacoustic Symposium 2011
Abstracts and Biographies
A co-presentation of the Canadian Electroacoustic Community (CEC) and New Adventures in Sound Art (NAISA), the 5th Toronto Electroacoustic Symposium (TES) was held in parallel with the 13th edition of Sound Travels, NAISA’s annual Festival of Sound Art. The Keynote Speaker for TES 2011 was Jonty Harrison.
10–13 August 2011
Artscape Wychwood Barns
601 Christie Street, Toronto ON
DAY 1 — Thursday, August 11th
“The CEC with its head in the clouds”: Where we came from, where we are, and where we can head
by Kevin Austin
In the early 1980s, electroacoustics (EA) was an under-recognized art. The CEC was created to respond to existing and future needs and interests of the EA community in Canada, and to an extent internationally. In the past 25 years, through a succession of Boards and Presidents, the CEC has grown and has successfully met many of these initial aims. There is a strong network of communication and support, publication, documentation, archiving, encouragement of a younger generation, and a strong international profile. This presentation will relate some of the histories and mysteries of these times up to the present, and with this as a foundation, some possible paths to the future will be proposed. Today’s presentation takes place about 25 years after the first major national conference here in Toronto and is dedicated to the recently departed pioneers, but more so to the present and future practitioners of the arts of sound through loudspeakers.
Kevin Austin, Montreal-based composer, teacher, arts animator. Degrees in composition from McGill University and has been active in electroacoustics for over 40 years. Now teaching EA at Concordia University in Montreal. Founding member of the CEC. Concert producer for four decades. Widely interested in many things from sound, to movement, to light, arts, nature and sciences. Recently stepped into the 21st century with his purchase of an iPad and is advancing his skills with Angry Birds©.
Session Chair: Bruno Degazio. Click on linked titles to read full articles published in eContact! 14.4 — TES 2011.
The Sonik Spring
by Tomás Henriques
The Sonik Spring is a new digital musical instrument that focuses on the issue of feedback in interface design as a condition to achieve a highly responsive and highly expressive performance tool. The instrument primarily emphasizes the relationship between kinesthetic feedback and sound production while linking visual and gestural motion to the auditory outcome. The Sonik Spring is portable, wireless and comfortably played using both hands. It features a 15-inch coil that can be compressed, expanded, twisted or bent in any direction, allowing the user to combine different types of intricate manipulation. Playing this instrument is meant to feel like holding and shaping sound with one’s own hands. The spring is attached at both ends to hand controller units each containing five push buttons and sensors that detect spatial motion in three dimensions (accelerometers and gyroscopes). The Sonik Spring can be used in three “performance modes”: Instrument Mode, Sound Processing Mode and Cognitive Mode.
Tomás Henriques is a composer, researcher and musician. His work includes pieces for acoustic instruments and for electronic and mixed media, and has been commissioned and performed in concerts and music festivals in both the USA and Europe. He obtained a PhD in music composition from the University at Buffalo in 1997. His research in music technology has focused on using sensor technologies to create innovative electronic instruments. Dr. Henriques won First Prize at the 2010 Margaret Guthman Musical Instrument Competition for the invention of his Double Slide Controller, a trombone-like electronic instrument. Dr. Henriques teaches music theory and composition at Buffalo State College, where he is the coordinator of the minor in Digital Music Production and the director of the Digital Music Ensemble. From 2009 to 2011 he was a research fellow at Carnegie Mellon University in Pittsburgh, working in the area of real-time interactive music composition.
Electroacoustic and Computational Feedback Synthesis
by Campbell Foster
An interactive demonstration lecture about Electroacoustic and Computational Feedback Synthesis. Topics to be discussed include the origins and milestones of previous research, electroacoustic realization, techniques, methods, design, computational realization, integration, compositional æsthetic, the identification of waveform phenomena against the signatures of space-energy data visualization, and the natural branching and tree-growth structures found in spectrogram analysis of the generated sounds.
Campbell Foster is a Canadian sonological researcher, composer, performer, interactive systems designer, educator and inventor of the Electro-acoustic Sheet Metal Feedback Phone. Campbell studied electronic music with Ann Southam at the RCMT in 1975, and at York University, where he earned a special honours degree in Electronic Music, Composition and Computer Science. Campbell was Music Director of Computer Music Research for Canada’s own Mcleyvier CMI (1982–86). His sonological research directive is to discover and research the methods, æsthetics and results of electroacoustic and computational feedback synthesis, and to interactively deliver and impart the sonological research concepts, methods, techniques and discoveries of feedback synthesis through his research blog RS2, sound and media works, field instrument installations, exhibits, lectures and performances for the scientific and æsthetic communities.
What happens when you turn an electroacoustic composition into an acoustic composition? What happens when acoustic music is transformed into an electroacoustic composition? How have composers found creative solutions to the problems inherent in this translation/re-composing process? This paper/presentation is inspired by reflections on my own practice as a composer; I recently composed a (completely acoustic) choral work based on one of my earliest ventures into electroacoustic composition. I wondered what other music has resulted from composers “translating” music (and sound) between acoustic and electroacoustic media. I would like to discuss some ways in which this has been done and to what degree the ideas change or stay the same when they are incorporated into the new piece. I am curious how these sorts of creative processes may blur the lines of musical genre: (How do we identify, categorize, and analyze a piece that was originally conceived in one medium, but now exists in another?) This also raises the question of how technology and the ways we can manipulate sound affect the compositional process for all composers, even for composers who are not creating electroacoustic works. Many composers of acoustic music draw inspiration from electroacoustic work, and I would like to show ways in which this interrelationship has manifested. I will also discuss research relevant to this topic and point out areas where there is potential for further research and/or creative exploration on this topic.
Fiona Ryan is a composer, improviser, and performer from Nova Scotia. She creates vocal and instrumental works for soloists and ensembles of all sizes, as well as music for improvising musicians and electroacoustic music. Her music has been performed in various venues in Canada as well as in the UK. She has a wide variety of interests, and this eclecticism can be heard in many of her compositions. Fiona completed a Bachelor of music at Dalhousie University, a Master of Music in Composition at the University of Newcastle studying mainly with Agustin Fernandez, and is currently a student in the Doctoral program in composition at the University of Toronto, where she has studied with Chan Ka Nin, James Rolfe, and Christos Hatzis. Her recent and current research includes a project on musical influences and experiences of Canadian women composers, and explorations in composing based on narrative and literary forms.
14:30–15:30 • Paper Session #2: Analysis
Session Chair: Mitch Renaud.
Cathy Lane’s recent book Playing with words: The spoken word in artistic practice (2008) captures some of the central æsthetic and creative elements of working with language in sound poetry, live voice with electronics, and studio composition, among others. Soundscape artist Hildegard Westerkamp is no exception with her numerous works that include poetry, autobiographical narrations, or articulated soundwalks. Westerkamp’s MotherVoiceTalk (2008) is a fifteen-minute work that was commissioned to celebrate the output of Japanese-Canadian Roy Kiyooka. She combines several narratives from six main sound sources: 1) Kiyooka paraphrasing his book MotherTalk; 2) recordings of his interviews with his mother, Mary; 3) Westerkamp’s voice reflecting on similarities she perceives between her and Roy; 4) recordings of her interviews with her mother, Agnes; 5) excerpted recordings of Roy playing a zither and a whistle; and 6) soundscape material. My analysis of this work examines Westerkamp’s negotiation of several boundaries that are often blurred in the electroacoustic medium: public vs. private; self vs. other; and time and space.
First, I discuss how by blurring the boundaries between public and private space, Westerkamp creates a unique “sense of place” in each work. The development of transportable recording equipment has contributed to the erasure of the boundary between public (e.g., natural) and private (e.g., domestic) spaces. Second, I demonstrate how MotherVoiceTalk enacts a self-reflective process through which Westerkamp defines her identity — “self.” In MotherVoiceTalk, Westerkamp weaves multiple narratives that address several similarities in Kiyooka and Westerkamp’s artistic and personal lives, including the importance of mothers as life givers and “second persons” (Lorraine Code), and the immigrant experience. Finally, I suggest that Westerkamp’s treatment of the voice and the use of soundscape material both maintain and transcend the boundaries of time, space and perspective in the multiple narratives in MotherVoiceTalk.
Alexa Woloshyn recently completed her doctoral studies at the University of Toronto, where her research on the voice and the body in contemporary Canadian electroacoustic music was funded by the Social Sciences and Humanities Research Council of Canada, and by the University of Toronto. Articles on Robert Normandeau and Imogen Heap have appeared in eContact! and in the Journal on the Art of Record Production, respectively. Alongside her musicological pursuits, Alexa performs regularly as a singer and pianist. She has presented at national and international conferences, including IASPM-Canada, IASPM-US, CUMS and the Art of Record Production conferences. Alexa has been busy teaching courses in popular music, music history and the musical avant-garde at the University of Guelph and the University of Toronto Scarborough.
The combination of sampling art and collage is a fundamental principle of the musical work of the American composer Henry Gwiazda, who calls himself a samplerist. The importance of Gwiazda lies not only in his artistic contributions, which raise the collage technique to an æsthetic concept, but also in the way in which he, as a sampling composer, lends the musical material a three-dimensional identity. Gwiazda’s longing for audio software that would permit the spatial positioning of sounds in space was fulfilled by Focal Point, a software developed by Bo Gehring, Les R. Titze and Garry D. Titze in the US in the 1980s. The software relies on the so-called convolution technique, with which a mono signal can be converted into a stereo signal positioned in three-dimensional space. The stereo signal, heard with headphones or, less effectively, with two loudspeakers, gives the listener the impression of hearing the sounds coming from different directions and places in three-dimensional space. The approximately nine-minute composition buzzingreynold’sdreamland — the focus of this analysis — was composed in 1994 and marks, if not the culmination point, at least an important station in Henry Gwiazda’s career.
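The convolution technique described above can be illustrated in a few lines. The sketch below is a deliberately simplified stand-in, not the Focal Point software itself: the two "impulse responses" here encode only an interaural time difference (delay) and level difference (gain), whereas measured head-related impulse responses also carry the spectral cues that make the illusion convincing over headphones. All names and parameter values are invented for the example.

```python
import numpy as np

def binaural_position(mono, itd_samples, ild_gain):
    """Convolve a mono signal with a left/right impulse-response pair.
    This pair encodes only a time delay (ITD) and a level difference
    (ILD); real HRIRs add direction-dependent spectral filtering too."""
    hrir_left = np.zeros(64)
    hrir_left[0] = 1.0                  # near ear: unit impulse (reference)
    hrir_right = np.zeros(64)
    hrir_right[itd_samples] = ild_gain  # far ear: delayed and attenuated
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])

sr = 44100
t = np.arange(sr // 10) / sr            # 100 ms test tone
mono = np.sin(2 * np.pi * 440 * t)
stereo = binaural_position(mono, itd_samples=30, ild_gain=0.6)
```

Heard over headphones, the 30-sample (roughly 0.7 ms) delay and 0.6 gain on the right channel push the phantom image toward the left ear; over two loudspeakers the effect is weaker, as the abstract notes.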
Bijan Zelli was born in Teheran, Iran in 1960. After completing his studies in electrical engineering at Sharif University of Technology in Teheran, he immigrated to Sweden, where he changed his career from engineering to music. He received his Master’s in Music Education in 1996 and then moved to Berlin for further studies in musicology. He began his doctoral studies under Professor Helga de la Motte-Haber’s supervision and received his PhD in 2001. His dissertation, “Real and Virtual Spaces in Computer Music,” is an analytical study of how spatialization works in electroacoustic compositions. Bijan Zelli has given many lectures on music in different countries, including Sweden, Germany, Iran and the USA. His research focuses on Western classical music, mostly concentrating on different aspects of modernism. He moved to the United States in 2007 and currently works as a music educator and independent researcher in San Diego, California.
15:45–16:45 • Paper Session #3: Algorithms
Session Chair: Alan Tormey.
OpenGL is a powerful graphics library available for many computer operating systems. It leverages today’s highly parallel Graphics Processing Units (GPUs) to provide realtime performance. To date these capabilities have been harnessed for commercial applications such as computer games and video processing, but they have not been much applied to algorithmically generated art. This paper describes a method of integrating OpenGL within a MIDI-based algorithmic composition environment, with the aim of producing such art, sometimes known as “Visual Music” — abstract, algorithmically generated full-motion images. Major technical challenges and solutions are described in detail. Two current projects which employ this experimental implementation are described.
Bruno Degazio is a composer, sound designer and educator. His film work includes the special-effects sound design for the Oscar nominated documentary film, The Fires of Kuwait and music for the all-digital, six-channel soundtracks of the IMAX films Titanica, Flight of the Aquanaut and CyberWorld 3D. His concert works for traditional, electronic and mixed media have been performed throughout North America and Europe. As a researcher in the field of algorithmic composition he has presented papers and musical works at leading international conferences. He is a founding member of the Canadian Electroacoustic Community and of the Toronto music ensemble Sound Pressure. He has written on his research into automated composition using fractals and genetic algorithms. Bruno Degazio is the designer of The Transformation Engine, a software musical composition system with application to algorithmic composition and sonification.
Towards a Generative Electronica: A Progress Report
by Arne Eigenfeldt
When creating a generative system, rules are required to limit the possible choices; in most cases, these rules are used to generate new compositions in the style of the composer. One difficulty with generative systems is validating the success of the system — in other words, whether the system has interpreted the rules correctly, or whether the rules themselves accurately model the desired style. In such a system, it is really only the creator of the system that can make this judgement: listeners can reject the musical result, but the system’s creator can argue that they are making æsthetic judgements of the music, rather than the system. However, if the aim of the system is to create music consistent within a given genre, it is possible to judge the success of the system — both artistically and practically — by the relationship of its output to the original corpus. We are pursuing the potential of creating software that generates electronic dance music in specific styles. We have selected 100 complete musical examples in the genres of Breakbeat, House, Drum & Bass, and Dubstep, and are using a combination of machine and human analysis of these works to derive rulesets, which, in turn, are used to generate new music consistent within the genres. Unlike the work of David Cope, who used a set corpus of existing music by composers such as Bach, Mozart, Beethoven, and Joplin to create new compositions through recombinance — stitching together music from given examples — we are using generative methods — including probabilistic methods and genetic algorithms — to create new music. This presentation will discuss how our methods differ from other generative music systems, and other music information retrieval (MIR) programs, and present musical examples of our ongoing research.
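As a toy illustration of the probabilistic side of such an approach (emphatically not the authors' software), the sketch below learns first-order transition probabilities from a tiny hand-made "corpus" of step patterns and then generates a new pattern that stays consistent with the transitions observed in that corpus. The corpus, symbols and pattern length are all invented for the example.

```python
import random

def train_markov(patterns):
    """Collect first-order transition counts from a corpus of step patterns.
    Storing raw successor lists makes sampling proportional to frequency."""
    trans = {}
    for p in patterns:
        for a, b in zip(p, p[1:]):
            trans.setdefault(a, []).append(b)
    return trans

def generate(trans, start, length, rng):
    """Walk the transition table to produce a new pattern of given length."""
    out = [start]
    while len(out) < length:
        out.append(rng.choice(trans[out[-1]]))
    return out

# Toy corpus: K = kick, S = snare, H = hi-hat (stand-ins for analysed bars)
corpus = ["KHSH" * 4, "KHHS" * 4, "KSHS" * 4]
trans = train_markov(corpus)
rng = random.Random(2011)
pattern = "".join(generate(trans, "K", 16, rng))
```

Judging such output against the corpus, as the abstract proposes, then amounts to checking that the generated transitions (and higher-level features) match the analysed examples.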
Arne Eigenfeldt is a composer of live electroacoustic music, and a researcher into intelligent real-time music systems. His music has been performed around the world and his collaborations range from Persian Tar masters to contemporary dance companies to musical robots. His research has been presented at conferences such as ICMC, NIME, SEAMUS, ISMIR, EMS and SMC. He is an associate professor of music and technology at Vancouver’s Simon Fraser University (Canada) and is the co-director of the MetaCreation research group, which aims to endow computers with creative behaviour.
http://www.sfu.ca/~eigenfel | http://metacreation.net
DAY 2 — Friday, August 12th
09:30–11:00 • Keynote Lecture
According to the musical history books, especially those in English or German, the most significant achievement of the electroacoustic medium was to add to music the means to control timbre and space. In actual fact, acousmatic music opened up far more than this — not least, a whole new paradigm for handling and interacting with sound directly, instead of through a system of notation. Yet, more than 60 years after Schaeffer, composers and theoreticians are still trying to find the vocabulary and methodologies for thinking about and discussing the issues involved in a medium whose poetics, language and materials range far beyond what was previously available under the label of ‘music’. In particular, the issue of ‘space’ — its meaning, handling and importance — all but ignored outside acousmatic music, continues to divide practitioners, who cannot even agree whether it is part of the compositional process or merely an issue of performance practice.
Jonty Harrison (*1952) studied at the University of York (DPhil in Composition, 1980). Between 1976 and 1980 he worked at the National Theatre and City University, London. In 1980 he joined the Music Department of the University of Birmingham, where he is Professor of Composition and Electroacoustic Music and Director of the Electroacoustic Music Studios and BEAST. He has won several national and international prizes (Bourges, Ars Electronica, Musica Nova, Lloyds Bank Composers’ Award) and been commissioned by leading organisations and performers. His music appears on three solo albums (empreintes DIGITALes, Montréal) and on several compilations (NMC, Mnémosyne Musique Média, CDCM/Centaur, Asphodel, Clarinet Classics, FMR, Edition RZ and EMF).
11:30–13:00 • Paper Session #4: Space
Session Chair: Arne Eigenfeldt.
The Audio Spotlight in Electroacoustic Performance Spatialization
by Darren Copeland
My practice of spatialization has concentrated on developing sound spatialization techniques that provide alternatives to the traditional model of projecting sound with a mixing console. Since 2006 this has been realized through a combination of computer control and a physical interface for live gestural control in performance: a customized Max/MSP patch made by Benjamin Thigpen (based on his spit~ object) is controlled with a Polhemus Patriot sensor worn on the hand of the performer and encased inside a glove. Prior to 2006, spatialization was realized using a fully automated method with the Richmond Sound Design Audiobox and the control software ABControl, developed by Chris Rolfe of ThirdMonkSoftware. Although that method had a lot of compositional power, it lacked the live presence and immediacy of the system currently in use. In 2011, I received new research funding to expand the current live spatialization system in ways that would also challenge some widely held conventions for spatialization in live sound art performance. One of those conventions is that the spatialization and sound projection of studio-composed works is almost invariably a solo performance in live applications. Another convention in these same instances is that the loudspeakers are stationary and that the movement of sound is realized entirely through phantom or digitally encoded spatial imaging. This contrasts with the everyday acoustic environment, where the movement of sound is linked with the physical movement of the object making it.
My latest research centres on challenging these conventions in a way that is economically feasible and not overly unwieldy to implement under the constraints of live performance. By having two performers physically manipulate the positioning of two handheld loudspeakers, the movement trajectories of sounds can be linked (as in the natural environment) to the physical movement of an object in space. These speakers have a far narrower dispersion pattern than conventional PA speakers or studio monitors. The two performers would collaborate in performance with a third performer, who would manually distribute sounds using a gestural controller to both a fixed conventional speaker array and to the speakers operated by the live performers.
Two loudspeaker models for this project are currently being compared and evaluated for the performers to use: the Holosonics Audio Spotlight and the Panphonics Sound Shower. Both models are lightweight and easy to manipulate in one’s hands over a one-hour performance period. Understandably, they have compromised low-frequency response, so they must still be used in tandem with a full-range system of conventional loudspeakers. One of the models will be demonstrated in this presentation so that the audience can better experience the unique characteristics of these speakers. The intended outcome of the project is to raise the standard of performance interpretation in my approach to live sound art spatialization. The project will do this by enhancing the visual correlation between the physical movements of performance and the resulting listening experience for audiences, and by heightening the life-like illusion of auditory spatial imaging and movement in performance. It is my hope that this new method will create an intriguing shift in spatialization performance practice from solo to ensemble performance, and with that it will pose a number of new performance possibilities and challenges for live sound art spatialization. Needless to say, I expect that it will have an impact on my approach to creating sound art works for performance in the future.
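For reference, the "phantom imaging" convention this project pushes against can be sketched in a few lines: between two fixed loudspeakers, a virtual source position is encoded purely as a gain relationship, with no physical object moving at all. The function below is an illustrative sketch of the common equal-power pan law, not NAISA's performance system or the Thigpen patch.

```python
import numpy as np

def equal_power_pan(mono, position):
    """Pan a mono block between two fixed loudspeakers.
    position: 0.0 = fully left, 1.0 = fully right.
    The sine/cosine gain law keeps total power (and so perceived
    loudness) roughly constant as the phantom image moves."""
    theta = position * np.pi / 2
    return np.stack([np.cos(theta) * mono, np.sin(theta) * mono])

block = np.ones(8)                      # placeholder audio block
left_ch, right_ch = equal_power_pan(block, 0.5)   # image at centre
```

Handheld directional speakers replace this gain illusion with the real thing: the trajectory of the sound is the trajectory of the loudspeaker.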
Darren Copeland is a Canadian sound artist who creates work for radio, performance and installations, with a focus on soundscape composition and multi-channel spatialization. His concert and radio works have been commissioned and presented worldwide (ZKM, Kunstradio, Engine 27, La Muse en Circuit) and are published on the internationally recognized empreintes DIGITALes label. He is the Artistic Director for New Adventures in Sound Art (NAISA), the Toronto-based presenter of Deep Wireless, Sound Travels, and SOUNDplay. In this capacity he has developed a unique system for multi-channel performance spatialization which uses a gestural controller and two Audio Spotlight performers to move sound around a performance hall. Copeland has created site-specific sound installations for Kitchener City Hall (Open Ears Festival), Metro Toronto Convention Centre (Interactive05 during Toronto Art Fair) and, with Andreas Kahre, he co-created a permanent sound installation that uses four Audio Spotlights at the new Queen Elizabeth Pool in Edmonton.
Exploring links within contemporary sonic art
by Ben Ramsay
There is a growing body of compositional work and theoretical research that draws from both Acousmatics and forms of electronic dance music. Much of this work blurs the boundaries of electronic music composition, often with vastly different æsthetic outcomes. This paper will comment on this research and will aim to identify issues arising from this type of work. It will explore the compositional and social links that can be perceived within contemporary sonic art and will offer a more sympathetic way of engaging with cross-genre research and composition.
The central theme of this research is how we might use these links to widen access to acousmatic music and to aid pedagogic practice. In particular, the research is concerned with documenting compositional ideas within Intelligent Dance Music (IDM), and how these practices might relate to acousmatic music composition. Particular attention will be paid to how we could use IDM to teach the concepts of acousmatic music to students studying the sonic arts. The paper will conclude with a discussion of the NoiseFloor festival, with attention to its role as a cross-genre electronic music festival.
Ben Ramsay graduated from Middlesex University, London, with a BA (Hons) in Sonic Arts in 2001, and is currently lecturing in Music Technology at Staffordshire University in the West Midlands, UK. His research is centred around acousmatic music composition and the exploration of compositional relationships that exist in modern forms of sound art. He is currently studying for a PhD in Electroacoustic composition at De Montfort University, Leicester, UK, under the supervision of Prof. Simon Emmerson.
Waves, Ripples, Beats: Psychoacoustic phenomena produced by electronic means as compositional material, and the potential of sine waves to trace the acoustical properties of a given room
by Chiyoko Szlavnics
This paper will describe how my use of ratio-based frequencies, unadorned sine waves, and very slow glissandi has served as means of exploring a special kind of microtonality using Just Intonation based ratios, one which has varying degrees of simplicity and complexity, one which highlights psychoacoustic phenomena, such as beating and fusion, and one which can produce a heightened awareness in the listener of the acoustical properties of the space where the pieces are presented.
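The beating phenomenon at the centre of this approach is easy to quantify: two sine tones close in frequency fuse into a single tone whose loudness pulses at their difference frequency. The sketch below is a minimal illustration with frequencies chosen for the example, not taken from any particular piece; it shows why a very slow glissando toward a unison, or toward a just ratio such as 5:4, is heard as gradually slowing beats.

```python
def beat_rate(f1, f2):
    # Two nearby sine tones beat at the difference frequency |f1 - f2|.
    return abs(f1 - f2)

# A slow glissando toward a fixed 440 Hz sine: beating slows near unison.
rates = [beat_rate(440.0, f) for f in (445.0, 442.0, 440.5)]

# Just vs. equal-tempered major third above A220:
just_third = 220 * 5 / 4        # 275.0 Hz, the pure 5:4 ratio
et_third = 220 * 2 ** (4 / 12)  # about 277.18 Hz
# Their nearest coinciding partials (4 * et_third vs. 5 * 220 = 1100 Hz)
# beat audibly, while the just third's partials coincide exactly:
partial_beat = beat_rate(4 * et_third, 5 * 220)
```

With unadorned sine waves there are no partials to mask these pulsations, which is what makes them such exposed compositional material.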
Chiyoko Szlavnics graduated from the Faculty of Music at the University of Toronto in 1989. She studied composition with James Tenney from 1994 until 1997, when she received a year-long Fellowship Grant from the Akademie Schloss Solitude in Stuttgart, Germany. After her residency, Szlavnics moved to Berlin, and attended Walter Zimmermann’s composition seminars at the University of the Arts (1999–2000). Since 2004, Szlavnics has used line drawings as the basis for her compositions, writing for ensembles that range from small chamber groups to chamber orchestra, often incorporating sine waves into her pieces. She has also produced several multi-channel electronic sound installations. Her work features a kind of microtonality derived from Just Intonation ratios, using glissandi and clusters to bring out acoustical phenomena such as beating, which, in turn, creates a surprising layer of rhythmic activity in her music. Szlavnics’ œuvre has gained increasing recognition in recent years, and has been featured in concerts across Europe and North America, as well as in radio broadcasts and internet podcasts. Her visual artworks have also been gaining recognition since 2009; they have appeared in publications, and will be exhibited at important drawing galleries in Berlin and London in the autumn and winter. Starting this autumn, Szlavnics will teach composition seminars at the University of the Arts in Berlin for two semesters, and in 2012 she will be a Fellow at Villa Aurora in California.
16:00–17:00 • Paper Session #5: Listening to the World
Session Chair: Kevin Austin.
I explore the sonic environment of the town of Ixtlan de Juarez, in the state of Oaxaca in southern Mexico, beginning with a brief description of the town and local area. I then discuss conceptions of sound as power, communication and exuberance in southern Mexico. I outline the ongoing negotiations of indigenous, community and local identities in Ixtlan, a debate of some urgency in an era of encroaching globalisation. The comunero political system is valorised as communal, truly democratic and egalitarian, as distinct from centralised bureaucratic state power such as that found in the state governments in Oaxaca City. Yet the interminable loud broadcasts of news, announcements, advertisements, prize draws and music from the loudspeaker mast at the Palacio Municipal revealed a complete centralisation of sonic power. I describe how attention to sonic practices in the town contrasts with claims made about locality, local communal power and resistance to external forces of state and capital. I then discuss debates surrounding environmentalism, exploring the different conceptions at play in new ecotourism businesses. Locals seek to gain from preserving and presenting something of value to outsiders, and the interplay between values and perceived values of locals and tourists will be a key factor in the political economy of ecology in the region. The clash between environmentalisms is obvious in the differing approaches and expectations of sound in environment: on one hand preservation for sustainable exploitation, on the other, a romanticised idea of protected natural tranquility. Finally, I offer some reflections on the implications of attention to sonic environments, with respect to important negotiations and representations of identity in Ixtlan. I conclude that attentiveness to sound can reveal neglected perspectives and offer new insights into how discussions are framed and conducted.
Owen Coggins was born in Luton in the UK. He studied Philosophy at King’s College, University of London, and completed an MA in Religions at the School of Oriental and African Studies, University of London, where he focused on religious music and music in religion: in particular, musical epistemology in Rastafari, and death, apocalypse and political liberation in pre-war African American gospel blues. He wrote a dissertation on transnational musical religious practices, based on time spent playing music with a London-based group of Qawwali musicians at melas, festivals, weddings and other functions. Owen is currently based in Toronto, where he is researching religious sounds and images in noise/drone/doom music. He has lived, worked and played trumpet in the UK, Nepal, Bolivia and Mexico before coming to Canada.
The Soundson Project
by Wiska Radkiewicz and Andrea Cohen
The Soundson project is a web-based environment in which composers, amateurs and students living in different countries create a common sound composition through an ongoing exchange of sounds. This project was created as an experimental approach to audio sharing and collaborative composition. The exchange of sounds does not take place in real time, which allows the composition to develop through a building process over a period of time. The participants in the Soundson exchange use sounds captured from the real world as raw materials, including spoken word, sounds produced by objects, environmental sounds or any captured sound event from the audible world. This practice creates musical results which break the boundaries of traditional electroacoustic music, blending in elements of radio art, audio art and sound poetry.
In a pedagogical context, the goal of the Soundson research is to explore the educational potential of shared composition through sound exchanges between groups with different cultural backgrounds. In its pedagogical application, the Soundson programme has been implemented in the USA, Mexico, Argentina and Europe in the form of sound exchanges between elementary, middle school, high school and university students. The benefits of the programme are threefold: musical, technological and cultural. In the artistic domain, we experiment with a similar process to explore the various forms of collaborative composition, and our latest work, “City-Soundings”, will be presented during the Toronto Electroacoustic Symposium.
The interactive site of the Soundson project is hosted by Columbia University, New York. Since 2008, the IOCT (Institute of Creative Technologies) at De Montfort University, Leicester, UK has included the Soundson Programme among its research projects to promote its development.
Andrea Cohen is a pianist, sound artist and radio producer. Born in Argentina, she has been living in Paris since 1974. She is an author and performer of several musical theatre works in which musical and theatrical elements are integrated into a personal, pluridisciplinary language. She has also composed incidental music for theatre, video and radio, and has composed an opera, Fois il était une deux trois, played by children. As author of numerous radio programs and works, she has been collaborating with Radio France (France Culture) since 1985. In 2005, she was awarded a doctorate by the University Paris-Sorbonne, where she successfully defended her thesis “Composers and Radio Art.” From 2007 to 2011 she was an Associate Researcher of the IOCT (Institute of Creative Technologies) at De Montfort University (Leicester, UK), where she developed the Soundson Programme with Wiska Radkiewicz.
Wiska Radkiewicz is an electronic music composer, sound and video artist. She received training at the Music Academy of Warsaw, Poland (music theory), University of Paris-Sorbonne (musicology), Groupe de Recherches Musicales — Conservatory of Paris (electronic music composition), City University of New York (computer music composition) and Princeton University, where she obtained a doctorate in music composition. Her interests range from musical improvisation, electronic composition and music pedagogy to radio and video. From 2007 to 2011 she was an Associate Researcher of the IOCT (Institute of Creative Technologies) at De Montfort University, Leicester, UK, where she developed the Soundson Programme with Andrea Cohen.
DAY 3 — Saturday, August 13th
Click here to return to the symposium schedule.
14:30–16:00 • Paper Session #6: Curation
Session Chair: Darren Copeland. Click on linked titles to read full articles published in eContact! 14.4 — TES 2011.
by Matthew Griffin
The notion of the Electroacoustic Concert rests on a single, simple premise: that we, as a group or as a collective, should listen to audio art that was specifically created to sound ideal played through an array of speakers. There is nothing inherently incorrect about this premise. In fact, this premise is rather magnificent. We should note the suggested democracy in such a premise: any audio, without respect for musical or social context, can be experienced on a formal level in the Electroacoustic Concert. However, the fundamental problem is that this simply doesn’t happen. There are covert parameters placed on that which is deemed appropriate for such a listening context. To which one must say: fair enough. If these additional parameters exist within a genre of listening contexts, they must serve some æsthetic purpose, some way of furthering a pursuit of the audio arts that would be compromised if corrupted. However, this paper will argue that that is not the case at all. In fact, the notion of the Electroacoustic Concert effectively denies the necessary genre dialogue that would allow electroacoustic music to grow, expand and change.
This dialogue does exist, though largely through other genres listening to and gleaning many fascinating compositional and studio techniques from electroacoustic music. Specifically, the work of David Toop and Paul Hegarty takes a broad view of studio-based composition and frames it in such a way that electroacoustic composition can and should be respected within, and in dialogue with, a larger scope of contemporary audio art; this paper will propose that a more contemporary view of the Electroacoustic Concert should include a broader range of compositions to facilitate this dialogue.
Broadly, this paper will propose that certain subsets of contemporary acoustic composition, dance music, hip-hop and pop music strive for the same goals as those sought after in electroacoustic music, and further that those genres are already in dialogue with the practice, and thus should be engaged accordingly. Specifically, this paper will go into detail discussing what it means for audio to be conceived for the speaker, parsing those attributes in the above genres which can be seen as in dialogue with electroacoustic music, ultimately arriving at the conclusion that, to properly assess various musics in context, the musical compositional motivation and the acoustic compositional motivation can and should be viewed independently.
The goal, as ever in electroacoustic music, is to be those who are truly listening to that which extends beyond social context, and who can effectively judge what constitutes an effective (and affective) use of the tools and techniques that can generate some sort of æsthetic change through audio art. The Electroacoustic Concert, from a curatorial standpoint, can be seen as a gesture of sharing: a moment of collectivity in which the collective shares a unique acoustic experience. But this simple artistic gesture can include a much more sophisticated dialogue than it currently does. From where we now stand, we’re inside a self-made ghetto, and it’s only through a more democratic curatorial vision that we can get out.
Matthew Griffin is a musician and composer from Kitchener, Ontario, Canada, now living and working in Chicago. He received his BFA from Simon Fraser University’s School for the Contemporary Arts and his MFA from The School of the Art Institute of Chicago. In addition to his work with Electricity is Magic, he is the Audio Curator with LiveBox Gallery. His recent exhibitions include a commission from the Experimental Sound Studio’s Florasonic audio installation series at the Lincoln Park Conservatory; his audiovisual work “Empire” showed in Seoul, South Korea as part of the [chicago] group exhibition presented by Prak-Sis Contemporary Art Association; and he premiered his new piece for solo trombone and car stereos, “Second Line for We”, in December.
by Steven Naylor
This paper explores some of the ways ageism may appear in the practice and study of electroacoustic music. Its perspective is rooted in direct involvement in the subject area, similar to participant-observation work. However, the analysis is not objective ethnography or sociology. Rather, it is fundamentally a series of subjective observations and reflections about an area of artistic practice of personal importance to the author. In its broadest sense, ageism is discrimination based on age. Originally associated with prejudice towards the elderly, the term is increasingly applied to any situation where a group’s or individual’s competence, desirability, acceptance or skill is assessed primarily (and presumably unfairly) on the basis of length or stage of life — whether young or old — rather than pragmatic criteria. To fully understand the potential range of ageism in electroacoustic music, we must extend that definition to include not only a priori assessments of artists and scholars based on their chronological age, but also pre-judgements of the techniques and technologies they use, and of the stylistic trends or approaches evident in their work. Its impact in those additional areas is amplified both by the accelerating pace of information dissemination, and by frequent shifts in our technological and stylistic expectations. This extended definition provides us with three distinct, though ultimately interlinked, categories of ageism to consider: chronological, technological and stylistic. We consider these manifestations of ageism from two complementary pairs of perspectives: internal vs. external assessment, and inclusionary vs. exclusionary group behaviour.
Halifax-based composer / performer Steven Naylor composes for concert performance, and creates scores and sound designs for professional theatre, television, film, and radio. His personal work is presently centred on radiophonic and acousmatic works. He is also active as a pianist, performing music that blends improvisation and through-composition. Naylor completed the PhD in Musical Composition, supervised by Jonty Harrison, at the University of Birmingham, UK. Naylor is a former President of the CEC.
by Eric Powell
Public presentations of electroacoustic art (EA) music exist in a wide range of formats — from simple studio-based show-and-tells, to low-key happenings in gritty artist-run venues, to technically elaborate multi-channel concert hall performances. Regardless of venue, there is a growing interest in creating a sense of the presentation existing as an ‘event,’ moving beyond the basic audience/performer, ear/loudspeaker relationships to one that brings the listener into an active role in the performance, integrating elements of cross- or multi-sensory perception. This practice is becoming increasingly popular in visual and contemporary art, with a large number of artists integrating olfactory elements into their installations, including Koo Jeong A & Bruno Jovanovik, Haegue Yang, and Federico Diaz (ARTnews, March 2011).
I am interested in documenting the growing number of EA concert events that combine sound with chemosensory elements (taste and smell). Since 2009 the number of EA presentations that have featured a curated concert with specially selected wine, food or liquor pairings to accompany each piece has grown rapidly. By combining elements of performance theory, sensory perception, wine culture and electroacoustic listening practices, this paper examines five recent public presentations in Canada and the USA that have combined food, drink and sound to create a cross-sensory concert experience: the Experimental Sound Studio’s Vinosonic I and II, Holophon’s Wine & JTTP-based Friend-raiser, their presentation of 60x60 at the Bushwakker Brew Pub, as well as Julia Miller’s Articular Facet 3. These events will serve as case studies for this paper, with comparisons made to other cross-sensory installations and presentations from artists working across contemporary art disciplines.
In addition to outlining contemporary electroacoustic music listening practices, presentation strategies and curatorial methodologies, I will outline the difficulties and rewards of attempting to curate a multi-sensory presentation environment, as well as the strategies curators and composers have employed to impose or obscure meaning in the perceptual cross-talk between chemo- and vibratory-sensation. Using curatorial statements, artist interviews, audience feedback and first-person recollections, this paper outlines both how composers have created work for a particular sensory combination and the methods curators have employed when they choose to combine certain sounds and flavours. I will also examine wine-tasting practices, terminology and the role of sommeliers in the curatorial process. I aim to determine whether this quasi-synesthetic curatorial model has value in enhancing the perception of both sound and taste. Is this a sustainable method of EA concert curation, or is the offer of wine or liquor simply a ploy to encourage the normally reserved electroacoustic concert-going audiences to show up to an event? Other considerations will be brought into these case studies, including the importance of venue selection, and how these events differ from a popular music experience in a bar or restaurant. The final component of this paper will look to the future of this cross-sensory practice. Can these events bring in new audiences, further developing the appreciation of EA music in a long-term, sustainable way, or is this a flash in the pan, a momentary affinity for a new or unusual concert experience?
Eric Powell is a sound artist and composer working with a wide variety of presentation methods including stereo and multi-channel tape, live performance with integrated electronics, as well as site-specific and interactive installations. In 2008 he received his MFA in electroacoustic composition from Simon Fraser University. Recently, he was commissioned by the Saskatchewan Arts Board to write a piece for multi-channel tape and chamber ensemble exploring the aural character of Saskatchewan. His work has been heard throughout Canada, Mexico and the USA with recent presentations at Toronto’s New Adventures in Sound Art, Hamilton’s James Street North Art Crawl, the Experimental Sound Studio in Chicago and the CanAsian Dance Festival in Toronto. He is a founding member of the Saskatchewan-based Holophon Audio Arts organization, and co-founder of electricity is magic — two groups dedicated to the creation and presentation of sound-as-art.