TES 2013 is a co-presentation of the Canadian Electroacoustic Community (CEC) and New Adventures in Sound Art (NAISA) and is held in parallel with the 15th edition of Sound Travels, NAISA’s annual Festival of Sound Art. The Keynote Speaker for TES 2013 is Francis Dhomont.
All activities take place at Theatre Direct’s Wychwood Theatre [ unless otherwise indicated ], in the Artscape Wychwood Barns
601 Christie Street — Studio 176, Toronto
Questions about the schedule or any other aspect of the symposium can be directed by email to Emilie LeBel, Chair of the symposium committee. For any registration or Sound Travels questions, contact Nadene Thériault-Copeland.
Day 2 — Thursday 15 August
09:30–10:45 • Session #1: Creative Practices
Chair: Steven Naylor
It is possible to think of the two extremes of the world of sound as the inner domain of microsound (less than 50 ms) where frequency and time are interdependent, and the external world of sonic complexity, namely the soundscape. In terms of sonic design, the computer is increasingly providing tools for dealing with each of these domains, such as granular synthesis and multi-channel soundscape composition. The models of interaction involved with the complexity of each of these domains are instructive and will be presented with sound examples.
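The sub-50 ms domain paired here with soundscape complexity is the territory of granular synthesis. As a rough illustration of the idea (a generic Python sketch, not the PODX system itself), grains can be short windowed sinusoid snippets scattered densely in time:

```python
import math
import random

SR = 44100  # sample rate in Hz

def grain(freq, dur_ms, sr=SR):
    """One Hann-windowed sinusoidal grain; dur_ms below 50 places it
    in the microsound domain, where time and frequency interdepend."""
    n = int(sr * dur_ms / 1000.0)
    return [math.sin(2 * math.pi * freq * i / sr)
            * 0.5 * (1.0 - math.cos(2 * math.pi * i / (n - 1)))
            for i in range(n)]

def granulate(total_ms, density_hz, freq, grain_ms=30, sr=SR):
    """Asynchronous granular synthesis: mix grains at random onsets
    into an output buffer at the requested average density."""
    out = [0.0] * int(sr * total_ms / 1000.0)
    for _ in range(int(total_ms / 1000.0 * density_hz)):
        g = grain(freq, grain_ms, sr)
        start = random.randrange(0, max(1, len(out) - len(g)))
        for i, s in enumerate(g):
            out[start + i] += s
    return out

buf = granulate(total_ms=500, density_hz=40, freq=440)  # half a second of grain cloud
```

Varying grain duration, density and frequency range over time is what turns such a cloud into a composable texture.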
Barry Truax is a Professor in the School of Communication, and formerly the School for the Contemporary Arts at Simon Fraser University, where he teaches courses in acoustic communication and electroacoustic music. He has worked with the World Soundscape Project, editing its Handbook for Acoustic Ecology, and has published a book Acoustic Communication dealing with all aspects of sound and technology. As a composer, Truax is best known for his work with the PODX computer music system that he has used for tape solo works and those which combine tape with live performers or computer graphics. In 1991 his work, Riverrun, was awarded the Magisterium at the International Competition of Electroacoustic Music in Bourges, France, a category open only to electroacoustic composers of 20 or more years’ experience. Truax’s multi-channel soundscape compositions are frequently featured in concerts and festivals around the world.
This lecture-recital is based on Breathwood, a 2010 composition for solo amplified bass clarinet, acoustic ensemble and surround-sound electroacoustic sounds. To be discussed is the compositional strategy of extending the acoustic ensemble by integrating the ensemble materials into the immersive electroacoustic sound world by means of the amplified bass clarinet soloist. While the soloist can act as a part of the ensemble, in terms of materials and textures, the amplification enables the soloist’s sound to be projected out from the loudspeakers surrounding the audience. In this way, the bass clarinet can also fuse with the electroacoustic material. The studio-produced materials were created from acoustic sounds, including recordings of the bass clarinet.
James Harley is a Canadian composer presently teaching at the University of Guelph. He obtained his doctorate at McGill University in 1994, after spending six years working and studying in Europe. His music has been awarded prizes in Canada, USA, UK, France, Poland and Japan, and has been performed and broadcast around the world. A number of Harley’s works are available on disc and his scores are primarily available through the Canadian Music Centre. He has been commissioned by numerous organizations in Canada and elsewhere. He composes music for acoustic forces as well as electroacoustic media, with a particular interest in multi-channel audio.
11:00–12:30 • Keynote Lecture
Chair: Kevin Austin
I have long considered that my musical output could be divided into two broad categories: one I call abstract, the other figurative. By abstract works I mean those that have no object other than the music itself, that are concerned solely with sonic criteria. They refer to no representation other than a musical one, to no metaphor. Set against this category, my figurative works are those that illustrate a theme (poetic, philosophical, psychoanalytical, etc.), in short, works that allude to extra-musical concepts or draw inspiration from them. The terms abstraction and figuration obviously refer to their usage in the visual arts, where one often encounters this kind of opposition between the representation of the model and its disappearance.

Certainly, both categories exist in my output, but they have proved to be of very unequal importance; indeed, a survey of my works as a whole confirms the overwhelming preponderance of works derived from various concepts, as well as thematic works, with or without texts. Looking back over my musical path, this finding faithfully reflects the logic of the project I had (I will speak of it later) when, in the 1960s, I abandoned instrumental composition to commit myself to another musical direction. At that time I moved away from traditional writing and returned to the electroacoustic composition I had discovered intuitively at the end of the 1940s through sound recording: a chance experiment, made with a primitive Webster wire recorder, but a decisive one for my commitment as a composer. From those first attempts, I believe I already sensed the profusion of material and the freedom offered by “fixed sound” (Chion) and the acousmatic modality (Bayle), which open an almost infinite creative field; later they allowed me, far more faithfully than instruments ever could, to translate the themes that preoccupied me.

Over long years of practice, this acousmatic writing imposed itself on me as an electroacoustic method uniting the qualities of a classicism (which must in no way be confused with academicism), that is to say, of maturity. For innovation to be credible and coherent, it must not be ephemeral or disposable; it must enjoy a certain lifespan, be explored, and constitute a further acquisition of thought. This is what has happened in music throughout its history and, more recently, with the discoveries, both conceptual and technical, that allowed electroacoustic musics to appear and to persist. Has not the sum of these discoveries, generating new ideas (the most recent extending the older ones without disavowing them) and tools ever better suited to a true writing of sound, favoured the return of a stability, an equilibrium, in other words, a classicism? “You know, the classics are just moderns who last…”, as François Florent said of the theatre.

The coherence of this writing has thus allowed me to develop the thematic and narrative aspects of an “impure music” as well as resolutely abstract musical discourses. I can thereby satisfy my interest in other artistic forms, which have always counted a great deal for me. By associating concepts with the musical utterance, by sometimes introducing texts and vocal material, and by readily adopting the formal approach of the visual artist, the filmmaker or the psychoanalyst, I seem to open my thinking onto a vaster and more complex perspective. For lasting invention never lies in technological novelty but in what one makes any technology say.
Francis Dhomont studied with Ginette Waldmeier, Charles Koechlin and Nadia Boulanger. In the late 1940s, in Paris (France), he intuitively discovered, by way of the magnetic wire recorder, what Pierre Schaeffer would later name “musique concrète”, and experimented on his own with the musical possibilities of sound recording. Later, abandoning instrumental writing, he devoted himself exclusively to electroacoustic composition. An ardent advocate of the acousmatic modality, he has since 1963 composed exclusively works on fixed media, which testify to a constant interest in morphological writing and in the ambiguities between sound and the images it can evoke.

The Conseil des arts et des lettres du Québec has awarded him one of its prestigious career grants. In 1999 he won five first prizes for four of his works in international competitions (Brazil, Spain, Italy, Hungary and the Czech Republic). In 1997 he received the Victor Martyn Lynch-Staunton Award from the Canada Council for the Arts and was a guest of the DAAD in Berlin (Germany). Honoured five times by the Concours international de musique électroacoustique de Bourges (France), notably with the Magisterium Prize in 1988, and awarded second prize at Prix Ars Electronica 1992 (Linz, Austria), he has received numerous distinctions for his works.

He has edited special issues for Musiques & Recherches (Belgium) and the issue Électroacoustique Québec: l’essor of the journal Circuit (Montréal). Musical co-editor of the Dictionnaire des arts médiatiques (published by UQAM), he is also a lecturer and has produced several programmes for Radio-Canada and Radio-France.

From 1978 to 2005 he divided his activities between France and Québec, where he taught at the Université de Montréal from 1980 to 1996. He has lived in Avignon (France) since the autumn of 2004 and frequently performs his works in France and abroad. A great traveller, he sits on numerous juries. An Associate Composer of the Canadian Music Centre (CMC, 1989), he is one of the founding members (1986) of the Canadian Electroacoustic Community (CEC), of which he became an honorary member in 1989. In October 2007 the Université de Montréal conferred an honorary doctorate upon him. He is president of the collective Les Acousmonautes (Marseille, France) and Ehrenpatron of Klang Projekte Weimar (Germany). He now devotes himself to composition and theoretical reflection.
15:30–16:30 • Session #2: Tools!
Chair: James Harley
The paper offers an overview of and insight into the promising new field of using human emotional responses, by means of psychophysiology, biofeedback and affective computing, in music composition and music performance. The techniques described open a vast number of new and unexplored possibilities for creating, in a radically new way, personalized interactive musical compositions or performances in which the emotional reactions of listeners are used as input.
The first part of the paper characterizes the concept of interactivity in the arts and gives a concise overview of its historical background, with an emphasis on the field of music. The theoretical framework developed by the pioneering Belgian multimedia artist P. Beyls serves here as a starting point. The second part begins with an overview of the fields of psychophysiology, biofeedback and affective computing and of how they can be used in the practice of music composition and performance. This overview is followed by a thorough discussion of how these fields offer a completely new perspective on several classical musical concepts, such as the musical score and the classical theory of musical affects.
The third part of the paper deals with the two most widely used methods of integrating direct human emotional response into performance art, indicated by the terms sonification and interpretation. In the first approach, sonification, biometrical data is transformed into musically or sonically meaningful data, such as MIDI data, which is subsequently integrated into a musical composition or performance. In the second approach, interpretation, a meaningful mapping of the biometrical data is elaborated and used, one that attempts to interpret the underlying human emotional responses.
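As a toy illustration of the sonification approach (with hypothetical sensor ranges, not taken from the paper), biometrical readings can be linearly rescaled into MIDI note and velocity values:

```python
def map_range(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale x from [in_lo, in_hi] to [out_lo, out_hi], clamping at the edges."""
    x = max(in_lo, min(in_hi, x))
    return out_lo + (x - in_lo) / (in_hi - in_lo) * (out_hi - out_lo)

def heart_rate_to_midi(bpm):
    """Map a heart-rate reading (assumed range 40-180 bpm) to a MIDI note number (C2-C7)."""
    return round(map_range(bpm, 40, 180, 36, 96))

def gsr_to_velocity(microsiemens):
    """Map a skin-conductance reading (assumed range 1-20 uS) to a MIDI velocity."""
    return round(map_range(microsiemens, 1.0, 20.0, 20, 127))

# A stream of (heart rate, skin conductance) readings becomes a stream of note events:
readings = [(60, 2.0), (75, 5.5), (110, 12.0)]
events = [(heart_rate_to_midi(b), gsr_to_velocity(g)) for b, g in readings]
```

The interpretation approach would insert a model between the sensor data and the mapping, estimating an emotional state first rather than mapping raw values directly.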
The fourth section of the paper describes a selection through history of some installations and projects where the use of biofeedback, psychophysiology and/or affective computing plays a central role. These include Alvin Lucier’s Music for Solo Performer, Richard Teitelbaum’s On Being Invisible, Biomuse Trio (Eric Lyon, R. Benjamin Knapp and Gascia Ouzounian) and the author’s EMO-Synth Project.
The fifth and final section of the paper deals with fundamental questions and paradigms that arise from the techniques and concepts described. The focus here is on how a composer or performer can extend or complement his or her own creative process by integrating human emotions quantitatively, and on how the use of direct emotional reactions shifts the boundaries between artist, audience and artwork. Questions of authorship and artistic authenticity are also discussed thoroughly in this concluding section.
Dr. Valery Vermeulen is a Belgian multimedia artist, electronic musician, mathematician and guest lecturer at various Belgian art institutes. In 2001 he obtained a PhD in pure mathematics at the University of Ghent. Between 2001 and 2005 Vermeulen worked at the Institute for Psychoacoustics and Electronic Music at Ghent University on a research project focusing on the link between music and emotions. Meanwhile he started writing and recording music in his own production studio. Since 2004 Vermeulen has been working on various interactive multimedia projects where the man-machine interaction plays a central role. His installations and performances have been widely shown in Belgium as well as abroad and cover various topics including affective computing, generative art, biofeedback and artificial intelligence. In addition to his activities as digital artist, Vermeulen works as a consultant in statistics and recently finished his studies as music producer at the Royal Conservatory of Ghent.
In this paper I present the application MyPic, a tool for Computer-Assisted Algorithmic Composition (CAAC). The primary aim of the software is to provide the composer with an interface for describing and orchestrating musical events via envelopes and auxiliary data for the purpose of driving algorithmic processes. The development of MyPic was mainly inspired by my own CAAC experiments and a lack of available software to create, organize, orchestrate and manipulate the data that would drive events/algorithms in my own CAAC. The application’s features and functionality, as well as some example applications of MyPic, are presented here.
The primary design goals for the development of MyPic were:
- To create an interface for describing/ orchestrating musical events through the use of envelopes (modelled after UPIC) that could export data for use in CAAC;
- To have various tools for drawing and manipulating the envelopes that represent musical events;
- To expand on the UPIC model by having an Event in MyPic consist of a bundle of data to better represent musical events;
- To have MyPic be a universal tool, able to be used in the composition of any CAAC work.
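The envelope-bundle model these goals describe, an Event carrying several breakpoint functions that algorithmic processes can sample over time, might be sketched generically as follows (an illustration of the UPIC-style idea, not MyPic's actual data format):

```python
def envelope_at(breakpoints, t):
    """Piecewise-linear envelope lookup; breakpoints is a time-sorted list of (time, value)."""
    if t <= breakpoints[0][0]:
        return breakpoints[0][1]
    for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
        if t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return breakpoints[-1][1]

# An "event" bundles several envelopes, each driving one musical parameter.
event = {
    "pitch":     [(0.0, 60.0), (2.0, 72.0), (4.0, 67.0)],  # MIDI note numbers
    "amplitude": [(0.0, 0.0),  (0.5, 1.0),  (4.0, 0.0)],
    "density":   [(0.0, 2.0),  (4.0, 16.0)],               # attacks per second
}

def sample_event(event, t):
    """Sample every envelope in the bundle at time t, yielding one parameter frame
    that an algorithmic process can consume."""
    return {name: envelope_at(bps, t) for name, bps in event.items()}
```

Exporting such parameter frames at regular intervals gives the kind of data stream that a CAAC algorithm can consume.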
Daniel Swilley (b. 1980) is a German-American composer of acoustic and electroacoustic music based in Champaign IL. He is a graduate of Valdosta State University (BM) and Georgia State University (MM), and is currently a Doctoral Candidate (ABD) in Music Composition at the University of Illinois Urbana-Champaign. While at UIUC (2007–11), Swilley served as the Operations Assistant for the Experimental Music Studios. Since 2011, he has taught courses in composition, electroacoustic music and music theory as an Adjunct Instructor of Music at Illinois Wesleyan University. Swilley’s composition teachers have included Tayloe Harding, Mei-Fang Lin, Heinrich Taube, Stephen Taylor, Sever Tipei, Robert Scott Thompson and Scott Wyatt.
Day 3 — Friday 16 August
09:30–11:00 • Session #3: Form and Function in EA
Chair: Emilie LeBel
This paper reflects on some of the challenges composers and performers may encounter when their creative work is based on quickly evolving electroacoustic technologies, rather than on the comfortable predictability of traditional acoustic instrumentation and related performance practices.
The topic may evoke images of frustrated artists, attempting to master a meta-instrument in permanent transition or confronting constant technical glitches — and those images are certainly part of the overall picture. But we must also consider how these challenges can constructively transform artists’ relationship with their tools and resources, and potentially strengthen their creative work.
We review the situation from three perspectives: studio-based electroacoustic composers; laptop performers; and instrumentalists who incorporate either interactive or passive electroacoustic technologies in their performances. In addition to the author’s own reflections, we also include insights contributed by other practicing artists working in those areas.
The goal is to broadly consider some of the ways the transience of our electroacoustic tools can fundamentally influence both the work we produce and our core creative processes.
Steven Naylor composes electroacoustic and instrumental concert music, performs (piano, electronics, seljefløyte) in ensembles concerned with both through-composition and improvisation, and creates scores and sound designs for theatre, film, television and radio. His concert works have been performed and broadcast internationally; his theatre scores have now played to live audiences of over five million, in 13 countries. Steven co-founded Upstream Music and the Oscillations Festival, and is a former President of the Canadian Electroacoustic Community (CEC). He is presently Artistic Director of subText and an Adjunct-Professor in the School of Music at Acadia University. His first solo DVD-A of electroacoustic works, Lieux imaginaires, released on empreintes DIGITALes, was nominated for a 2013 East Coast Music Award. Steven completed the PhD in Musical Composition at the University of Birmingham, UK, supervised by Jonty Harrison. He lives in Halifax NS, Canada.
This paper investigates the concept of “translation” in music. The misuse of the term — whether as part of the “music as language” conjecture or interchangeably with other “trans” words — underscores a demand for clarity. While I propose that translation is a specialized, ontological branch of transformation, I believe that, relative to musical practice, there are multiple modes of translational production due to the temporal nature of the medium. The first section of the paper observes problems inherent to identifying the composer as a translator, a language creator, both, or neither. Divergences in locating a “true” qualifier for the composer lead to a brief discussion on translation in the contexts of linguistics and mathematics. My goal is to clearly delineate differences in the use of the term translation and provide a workable definition within the bounds of music composition. In order to better understand this, I enlist the insights of Ashby, Bateson, Shannon and others for a cybernetic explanation of the operation. Finally, I question the need for analysing music relative to its “context” by applying principal theories of Badiou, Derrida and Žižek as they pertain to the process of translation. The second section highlights compositional sketches derived from my translational studies, some of which are realized in my fixed media composition OSCines (2013). I discuss pre-compositional problems concerning the abstraction of sonic source materials and the projection to sonic target materials, as well as translational issues that emerge as a result of my compositional “choice”.
Benjamin O’Brien composes and performs acoustic and electro‐acoustic music. He is currently pursuing a PhD in Music Composition at the University of Florida. He holds a MA in Music Composition from Mills College and a BA in Mathematics from the University of Virginia. Benjamin has studied composition, theory and performance with John Bischoff, Ted Coffey, Fred Frith, Paul Koonce, Roscoe Mitchell and Paul Richards. His compositions have been performed at national and international conferences and festivals including ICMC, EMS, NYCEMF, SCI National Conference and SuperCollider Symposium. He received the Elizabeth Mills Crothers Award for Outstanding Musical Composition and is a WOCMAT International Electroacoustic Music Young Composers Awards Finalist. His compositions have been published by SEAMUS and Taukay Edizioni Musicali. He performs regularly with the international laptop quartet Glitch Lich.
In recent years, in both the instrumental New Music and electroacoustic milieux, there seems to have been an increasing interest in miniatures, works having a duration of less than a specific threshold. There is, however, a surprising shortfall of reflection and discussion on the nature of the miniature or the “work of limited duration.” With a certain number of projects promoting the form now having been realized and a large number of works available which purport to be in miniature form, we are now able to make an assessment of the nature of this particular musical form by asking such questions as: Why is it relevant or interesting to compose in this form(at)? More fundamentally, what in fact constitutes a “miniature”? The few discussions that do address the differences between miniature and non-miniature form typically concern duration, with no conclusive explanation of why the particular duration (varyingly 60 or 90 seconds, 3 or 5 minutes) was found to be an appropriate threshold to establish a piece as a miniature (and therefore make it eligible for a particular project).
Many works of limited duration might more aptly be deemed short works rather than miniatures, as they display indisputable correspondences to one or another form type already established by larger scale compositions. Using the term “miniature” to refer to a specific piece is a qualitatively different declaration than referring to it as “a short work,” or even “work of limited duration.” Accordingly, for a musical work to be considered a “miniature” — as opposed to “a short composition,” a “pop format” or “radio format” work, for example — it will need to conform to certain basic criteria, independent of the work’s duration. Here, it is proposed that a work is only to be considered a miniature when it engages the listener in a particular quality of listening or artistic experience — whether or not known to its creator — which is fundamentally different from that typically encountered in larger scale works.
Several form types encountered on a recurrent basis in works of limited duration are analysed and proposed as models to be used as the basis for judging whether or not a work is in “miniature form.” An overview of projects and competitions from the past two decades which invited submissions of “miniatures,” “electro-clips” and “works of short duration” is accompanied by a comparison of their submission and eligibility criteria, in order to articulate parallels and differences in regards to their understanding of miniature form.
It is argued that duration, while undoubtedly a critical determining factor, is not the essential characteristic which determines a work as a “miniature,” as is the common view in personal and listserv discussions and calls for submissions to “miniature” projects. A work is to be considered as having “miniature form” only when it can in fact be understood as problematizing form on some level and offering some perspective on the actual nature of miniature form.
Canadian composer jef chippewa is particularly interested in questions of cultural awareness and identity in regards to the composer’s responsibility in inheriting or appropriating cultural heritage. Understanding the impossibility of definitive articulation or comprehension of cultural identity does not justify conscious ignorance of any of its aspects. Nor does it excuse irresponsibility in cultural appropriation, and this applies equally to the appropriation of one’s “own” culture (cultural heritage) as to that of another culture or sub-culture (“external influences”). He recently completed 17 miniatures (2012), for flutes, extended piano, drumset and several dozen sound-producing objects, commissioned by Berlin-based Trio Nexus, and “… unless he senses when to jump” (2012), commissioned by Berlin’s LUX:NM ensemble with the support of the Canada Council for the Arts. Since 2005, jef chippewa has been the Administrative co-Director of the Canadian Electroacoustic Community (CEC), Canada’s national association for electroacoustic music, as well as Coordinating Editor for the CEC’s quarterly journal for electroacoustics, eContact!
11:15–12:30 • Session #4: Creative Practices 2 — Data Manipulation and Generation
Chair: Adam Tindale
The subject of this lecture-recital will be the processes behind the composition of the author’s recently completed project Hadronized Spectra (the LHC Sonifications). As the name implies, this project was inspired and driven by the work being done at the Large Hadron Collider at CERN. It involves parameter-mapping sonification of numerical data derived from proton collision events. The data was collected by the ATLAS detector, one of several at the LHC. Five compositions were realized for this project, all using various permutations of the same data. Two of these compositions will be presented in their entirety, in addition to excerpts from the others. We will also touch upon ideas of emergence, sonification, immersion, a novel approach to spatialization and the basics of how the LHC works. However, the primary focus will be upon listening.
Composing from a contemporary musique concrète perspective augmented by various score synthesis techniques, the author elicits musical events from generative algorithms and an ever-expanding Csound sample playback instrument. Numerical representations of aural quanta are mixed and blended into formal elements via a variety of catalysts such as tendency masks, mathematical equations, sonifications, cellular automata, score based sampling and other paradigms in an unbending quest for emergent quanta.
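One of the catalysts named here, the tendency mask, constrains random choices between two boundaries that themselves evolve over time; a minimal sketch of the general technique (not the author's Csound implementation) is:

```python
import random

def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def tendency_mask(lo_start, lo_end, hi_start, hi_end, n, rng=random):
    """Generate n values, each drawn uniformly between a lower and an upper
    boundary, where both boundaries move linearly over the sequence."""
    values = []
    for i in range(n):
        t = i / (n - 1) if n > 1 else 0.0
        lo = lerp(lo_start, lo_end, t)
        hi = lerp(hi_start, hi_end, t)
        values.append(rng.uniform(lo, hi))
    return values

# A pitch contour that starts anywhere in [48, 60] and narrows toward [71, 73]:
contour = tendency_mask(48, 71, 60, 73, n=20)
```

The mask shapes the statistical behaviour of the material while leaving the individual events to chance, which is what makes it useful as a formal catalyst.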
Michael Rhoades is honoured to have served as a SEAMUS board member and hosted SEAMUS 2009. He curated the monthly Sweetwater Electroacoustic Music Concert Series and numerous other concerts, exhibits and installations. His works have been presented in concert worldwide as well as used for pedagogical purposes. He is a published writer and also gives lectures on the subjects of algorithmic composition, score based sampling, sonification, spatialization and creativity. He recently published his 19th CD/DVD titled Horizontal Blue (Cycles within a Cycle). Michael is also a prolific oil painter and visual artist.
The use of data obtained from spectral analyses in generating musical material is a well-established compositional practice, explored by composers seeking to draw connections between acoustic phenomena and musical structures. Although many software environments are capable of performing spectral analyses of source sounds, the data that they produce is typically complex and extensive and therefore often unwieldy for use during the compositional process. This paper presents a set of tools for the pre-compositional investigation and manipulation of spectral data, developed as part of the author’s doctoral dissertation.
In order to improve a composer’s ability to explore, manipulate and evaluate spectral analysis data, the author has developed a set of tools (patches) using the graphical programming language OpenMusic (OM). A basic set of patches allows the user to investigate analysis information, parsing and filtering data according to partial strength, register and temporal position, while receiving feedback for evaluation both visually in musical notation and aurally through re-synthesis. This initial stage allows the composer to obtain suitable raw materials while maintaining a generic data structure, thereby facilitating the flexible application of this information in a wide variety of subsequent processes.
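Filtering analysis data by partial strength and register, as described, can be sketched outside OpenMusic as a simple predicate over (frequency, amplitude) partials (a generic illustration, not the author's OM patches):

```python
def filter_partials(partials, min_amp=0.01, lo_hz=20.0, hi_hz=8000.0, max_partials=None):
    """Keep partials above an amplitude threshold and within a register,
    optionally truncating to the strongest few.  Each partial is (freq_hz, amp)."""
    kept = [(f, a) for f, a in partials if a >= min_amp and lo_hz <= f <= hi_hz]
    kept.sort(key=lambda p: p[1], reverse=True)  # strongest first
    if max_partials is not None:
        kept = kept[:max_partials]
    return sorted(kept)  # return in ascending frequency order

# A toy spectrum: a 220 Hz fundamental with 1/k-decaying harmonics plus one outlier.
spectrum = [(220.0 * k, 1.0 / k) for k in range(1, 40)] + [(15000.0, 0.2)]
strongest = filter_partials(spectrum, min_amp=0.05, hi_hz=4000.0, max_partials=8)
```

Keeping the result as a plain list of (frequency, amplitude) pairs reflects the generic data structure the paper argues for: the same material can then feed notation, re-synthesis or further algorithmic processes.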
Once the primary spectral information has been extracted, pitch and amplitude data may be imported into specialized patches for manipulation in idiosyncratic ways, to obtain material that meets the creative needs of a particular piece. This paper demonstrates two methods of expressing spectral information that have been employed in the author’s compositions: the first extends a spectral profile to the characteristics of a stochastic texture, as found in Elucide for chamber orchestra; in the second approach, spectra function as progenitors within a genetically-modelled algorithm, used to create an evolution of harmonic structures in Conduits for clarinet and electronics.
The author has found that by combining a generic approach to spectral data selection with specialized patches for manipulation, the pre-compositional process can be intuitive, flexible and readily integrated within traditional compositional practices. It is the aim of this paper to suggest to other composers how similar methods may be incorporated into their own creative processes.
David Litke holds degrees in composition from the University of Toronto and the University of British Columbia, having completed doctoral studies at the latter under the supervision of Dr. Keith Hamel. After completing graduate studies in 2008, he taught courses in electroacoustic music and music theory at UBC, and taught theory and aural skills at the University of Windsor in 2012–13. His music has been performed by many of Canada’s finest musicians, including the National Broadcast Orchestra, l’Ensemble Contemporain de Montréal and the Turning Point Ensemble. His work has been recognized nationally and internationally, in composition competitions (NBO, SOCAN and CUMS competitions) as well as in emerging composers’ programs (ECM’s Génération 2006, NAC Young Composers 2008, acanthes@ircam New Technologies 2012, Composit 2013). He has also been active in electroacoustic music research and has given presentations at ICMC and SMC conferences on the gestural control of electroacoustic music as well as computer score following.
15:30–16:30 • Session #5: EA Outside the Concert Hall
Chair: Amanda Lowry
An historical overview of the Montréal-based electroacoustic music label empreintes DIGITALes from its foundation in 1990 until today: the challenges of access it was addressing, the content it focuses on, the forms it has selected, the music publishing scheme it borrowed and the outreach means it has used. Over the same period, a schematic view of the changes in the cultural and social landscapes caused by technological changes in the means of communication. In view of these, the relevance of a “label” and its roles in curating, presenting (including packaging) and outreach are questioned by further identifying the needs and means of 21st-century listeners, involving conference attendees in an open discussion. Finally, the on-demand streaming service electro:thèque is positioned vis-à-vis the information-rich online shop <electrocd.com> and its model compared to other similar services.
Jean-François Denis first discovered electroacoustics during a summer course given by Kevin Austin in 1981 at Concordia University in Montréal (Québec). Hooked, he pursued music studies at Mills College in Oakland (USA) under David Rosenboom (MFA, 1984). He has worked in live electroacoustics (solo and in ensembles) and created works for dance and multimedia until the mid-90s. He is the Artistic Director of the empreintes DIGITALes (1990) record label. In 1994, for his exceptional commitment to Canadian composers, he was awarded the Friends of Canadian Music Award organized by the CMC and the CLC. In 1995 he was presented the Jean A. Chalmers Award for Musical Composition — Presenters’ Award for his contribution to the diffusion of new Canadian (electroacoustic music) work. In 2011, he was awarded the Prix Opus 2009–10 for Artistic Director of the Year in recognition of his 20-year involvement in sound recording publishing / production.
MoneySquall will attempt to answer a simple question: how have sales affected the form of electroacoustic music? This paper will take a rigorous look at the patterns that have emerged in the past 40 years of record sales of this genre of music, as record labels such as Smalltown Supersound, Warp, and Kranky have emerged to organize, popularize and commodify the genre. Additionally, how has the emergence of file sharing (and, more recently, streaming) affected the proliferation and consumption of electroacoustic music? From these figures we will be led to ask: has the form of electroacoustic music changed to accommodate this commodification? Have we started to privilege (critically) certain forms due to their popular success? Has a resistance to this commodification emerged? How have the boundaries of what can be called electroacoustic music changed as a result of this commodification?
Using the shipment database of the Recording Industry Association of America (RIAA), this paper will take a detailed look at both short- and long-term patterns of electroacoustic record sales. The RIAA has been collecting data since 1973 (and the data is current as of the end of 2012), so early pioneers of the genre will not be included in this paper.
One of the major issues in tackling a topic such as this is the problem of inclusion / exclusion: where does one draw the boundaries of “electroacoustic music”? Given the complexities of promotion / distribution and the massive effect record labels can have on sales, this paper will attempt to make its comparisons with respect to release method (e.g., independent label, self-release, digital release). Similarly, with such a huge percentage of electroacoustic and experimental music being released on an extremely small scale, this paper will primarily look at names that are generally accepted to be figureheads of a particular genre or subgenre (which will admittedly focus on releases with relatively larger sales figures). While this methodology is by no means perfect, it will allow for stricter control groups and thus more easily explained and understood trends in sales.
The corresponding analysis of form will similarly focus on easily quantifiable aspects of electroacoustic music, such as track lengths, album lengths, repeated sections and number of sections. These are deliberately straightforward musical qualities, chosen in an attempt to find simple, macro-level trends in form over time.
Once these two methods of analysis are superimposed, this paper will finally begin looking at patterns and drawing conclusions, all of which will add insight to the questions posed above.
Matthew Griffin is a musician and composer from Kitchener, Ontario, Canada now living and working in Toronto. He holds a BFA from Simon Fraser University’s School for the Contemporary Arts and an MFA from The School of the Art Institute of Chicago. He is the co-founder and Artistic Director of Electricity is Magic, an arts organization and alternative gallery space based in Toronto. His recent exhibitions include Latent Players, a two-person show with painter Jenal Dolson at Concord Gallery in Toronto, his solo exhibition Hearing Every Rhyme at Webb School Gallery in Knoxville, Tennessee, and his collaboration with Eric Powell and Campbell Foster as Kata-Stroph for Toronto’s Nuit Blanche. His upcoming solo album Shake Kids in the Sunset Hills will be released by Electricity is Magic in 2013.
Day 4 — Saturday 17 August
Click here to return to the symposium schedule.
09:30–11:00 • Session #6: Networked and Open Source Creative Practices
Chair: Matthew Griffin
For the Toronto Electroacoustic Symposium 2013, I would like to propose a paper presenting historical reflections on the phenomenon of laptop orchestras and, within that, live coding laptop orchestras in particular. The activity of the numerous laptop orchestras and live coding ensembles that have arisen over the course of the past decade is generally theorized as either an extension of electronic instrument design (New Interfaces for Musical Expression) or of research into the possibilities of existing and emerging programming languages (live coding). I propose to review and resituate the emerging laptop orchestra and live coding “traditions” in response to some of the long-standing “ear-centric” traditions of thought about sound — principally Adorno and Cage, Schaeffer and Bayle, and acoustic ecology / soundscape composition.
Since 2010, I have been the director of the Cybernetic Orchestra at McMaster University, a participatory laptop orchestra (open to the entire university community and including participants of very diverse backgrounds and interests) that has released two full-length albums (esp.beat and Shift, both available via soundcloud.com/cyberneticOrchestra). My role in this ensemble has shown me numerous examples of the double relationship that the laptop orchestra phenomenon bears to broader sound art traditions: the laptop orchestra is at once a gateway to long-standing sound art traditions and also a critique of them. The participant in a laptop orchestra develops and refines the aural discrimination required for an appreciation of the nuances of ear-centred art forms, but by virtue of the same activity opens pathways that lead into multimedia and non-sensory art and design practices, code culture and the digital humanities.
The main purpose of this review is the orientation (or reorientation) of laptop orchestra and live coding research in and around my community of practice during the coming years. We have reached a point where laptop orchestras — be they liked or loathed, be they greeted with enthusiasm or scepticism — are relatively established features of the electroacoustic landscape, with strong representation within institutions of higher education and early signs of growth and acceptance in less institutionalized contexts. I would like to believe that the next phase of our work will be characterized by less attention to technical details (as questions of how to make sound x, or how to measure interaction y become an aspect of community permaculture and wider audience awareness) and by increased attention to the relationship of this work to bigger questions, including: What are the challenges of the electroacoustic community and to what extent do laptop orchestras help meet these challenges? What are the broader social benefits of participatory electroacoustic ensembles? To what extent are we opening the message in a bottle, uncorking the fate Adorno had identified for modern music? And to the extent that we are opening the message in a bottle: do new potentials for the artistic traditions of the 20th century thus emerge?
David Ogborn is a creator, performer and producer working at the front lines of experimental music, sound art and code culture. Highlights have included Metropolis (2007, live electronics with silent film), Opera on the Rocks (2008, opera with live electronics), Emergence (2009, live electronics + physical computing) and Waterfall (2010, collaborative video sculpture at Summer Olympics). Ogborn is a member of the live coding trio extramuros and the director of the Cybernetic Orchestra at McMaster University, where he teaches a broad range of courses in audio and computational media. He was the founding chair of the Toronto Electroacoustic Symposium and served as the President of the Canadian Electroacoustic Community (CEC) from 2008 to 2013.
Communicating with OSC via UDP over a wireless network poses well-known problems for the performance of networked computer music: dropped packets and timing inconsistencies create difficulties for composers who need a certain degree of reliability for their pieces to work. Based on experience with the laptop ensembles PLOrk and Sideband, this paper summarizes these problems and presents a networking utility named LANdini as one possible solution. LANdini takes the sending and receiving algorithms of LNX Studio, a network-based collaborative music environment by Neil Cosgrove, as a starting point, and uses these to create three types of sending protocol: normal OSC sending, “guaranteed delivery” and “ordered guaranteed delivery”. In addition, LANdini features a “stage map”, whereby ensembles can more easily use spatial arrangement information in pieces, and a robust implementation of shared network time. The motivation for and implementation of these features are discussed at length. Initial tests and concert use show encouraging results, and plans for future improvements are presented.
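The “guaranteed delivery” and “ordered guaranteed delivery” protocols can be illustrated with a minimal sketch — hypothetical code, not LANdini’s actual implementation — in which a sender retries a message until the lossy channel acknowledges it, and a receiver holds back out-of-order messages until every gap is filled:

```python
import random

class GuaranteedSender:
    """Retry until the channel acknowledges delivery ('guaranteed
    delivery' over a lossy datagram link). Illustrative sketch only,
    not LANdini's actual code."""
    def __init__(self, channel, max_retries=50):
        self.channel = channel        # callable(msg_id, msg) -> acked?
        self.max_retries = max_retries
        self.next_id = 0

    def send(self, message):
        msg_id, self.next_id = self.next_id, self.next_id + 1
        for attempt in range(1, self.max_retries + 1):
            if self.channel(msg_id, message):   # ack received
                return attempt                  # number of tries used
        raise RuntimeError("gave up after %d tries" % self.max_retries)

class OrderedReceiver:
    """Buffer out-of-order messages and release them strictly in id
    order ('ordered guaranteed delivery')."""
    def __init__(self):
        self.expected = 0
        self.held = {}

    def receive(self, msg_id, message):
        self.held[msg_id] = message
        released = []
        while self.expected in self.held:
            released.append(self.held.pop(self.expected))
            self.expected += 1
        return released

# A toy channel that randomly drops 30% of packets:
random.seed(42)
sender = GuaranteedSender(lambda i, m: random.random() > 0.3)
```

With the receiver in place, a message with id 1 arriving before id 0 is simply held back until 0 fills the gap, at which point both are released in order.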
Jascha Narveson is a doctoral candidate in music composition at Princeton University. He has an MA from Wesleyan and a BMus from Wilfrid Laurier University, also in music composition. He’s an active composer, as well as a regular performer with Sideband, a professional offshoot of the Princeton Laptop Orchestra.
quince is a program for editing time-based data on the Mac. Although quince was developed to serve musical purposes, theoretically, not only audio but also video and any other time-based data type can be edited in quince. Generally speaking, the main application for quince is the creation and editing of sequences of events in time.
quince was developed from a point of view that assumes that software developed for artistic tasks is always insufficient: there are always features missing, and it can almost never do exactly what one needs. Standard DAWs and sequencers are stable, powerful and convenient, but they are also inflexible, in most cases not extensible, and for many tasks that stray from the beaten path they are simply useless.
The biggest difference between quince and the standard DAW is that quince does not operate in the “tape recorder vs. mixing desk” paradigm. There are no channels in which audio data is arranged and since there are no channels, there also is no need for a mixer to mix them. Instead, quince presents its contents in display strips, which can contain arbitrary numbers of layers of data.
Another big difference is that quince is not limited to the processing of audio data. quince is operating on an abstract layer where it does not matter whether an event represents a video clip, an audio file, or whether the event should trigger the execution of a shell script. The functionality is dependent only on the existence of an appropriate plug-in.
quince is based on a core, which handles basic operational tasks such as file and object management, but which does not itself implement functionality for working on sequences of events. The core does, however, provide an interface for plug-ins. Almost all of the functionality of quince is implemented in plug-ins: display plug-ins, function plug-ins and player plug-ins. quince is designed to be extensible: the interface to the core (the Quince API) is very flexible, and the amount of code that needs to be written to add a new plug-in is minimal. quince may be an option for those who have their own ideas about how to manipulate their data, who are not satisfied with standard solutions, and who are able to write a few lines of code to implement custom functions on their own.
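The core-plus-plug-ins design can be shown in miniature. The following is a generic sketch of the architecture described above, not the actual Quince API: a core that only manages registered plug-ins and delegates all real functionality to them.

```python
class Core:
    """Minimal core: it manages registered plug-ins and delegates all
    actual work to them, in the way quince's core delegates to its
    display, function and player plug-ins. Generic sketch only, not
    the Quince API."""
    def __init__(self):
        self.plugins = {}                     # kind -> {name: callable}

    def register(self, kind, name, func):
        self.plugins.setdefault(kind, {})[name] = func

    def run(self, kind, name, *args):
        return self.plugins[kind][name](*args)

core = Core()
# A "function plug-in" needs only a few lines of code. Events here
# are (time, payload) pairs:
core.register("function", "reverse", lambda events: events[::-1])
core.register("function", "shift",
              lambda events, dt: [(t + dt, e) for t, e in events])
```

The point of the design is visible even at this scale: adding a new operation touches only the plug-in registry, never the core.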
Maximilian Marcoll (1981) lives in Berlin, teaches in Berlin and Düsseldorf, and studied percussion, instrumental and electronic composition in Lübeck and Essen, Germany. In his Compounds series (since 2008) he focuses on the transcription of concrete sounds, mostly recorded in live, everyday situations, and the creation of a material network, on which the pieces of the series are based. The development of software is a part of his compositional activities. In 2010, the software quince was released. Marcoll is a member of the artist group stock11.
11:15–12:30 • Session #7: Live and Non-Realtime Approaches to Presenting EA Projects
Chair: Adam Vidiksis
CrossTalk is an electroacoustic performance in which the audience and two singers interact with each other through a single augmented instrument, developed in Pd. The instrument analyses the performers’ incoming signals and maps them to signal-processing parameter controls in real time. Extending the use of music information retrieval technology to control actions and processing, CrossTalk routes each performer’s control data to the opposite performer, so that each singer is at once both a performer and a controller of the other’s signal. This multidimensional interaction between the performers and audience invites all parties to question: who is the performer, what is the instrument and who is in control?
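The cross-routing at the heart of the piece can be sketched as follows. The analysis features and processing parameters here are hypothetical stand-ins for illustration, not the piece’s actual Pd mappings:

```python
def scale(v, lo, hi):
    """Clamp a normalized 0..1 feature and scale it into [lo, hi]."""
    return lo + max(0.0, min(1.0, v)) * (hi - lo)

def cross_map(analysis_a, analysis_b):
    """CrossTalk-style routing sketch: each singer's analysis drives
    the *other* singer's processing. Feature names ('pitch',
    'loudness') and parameters are hypothetical."""
    def params_from(analysis):
        return {"ringmod_hz": scale(analysis["pitch"], 50.0, 800.0),
                "reverb_mix": scale(analysis["loudness"], 0.0, 1.0)}
    # A's parameters come from B's analysis, and vice versa.
    return params_from(analysis_b), params_from(analysis_a)
```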
Michael Palumbo is a computer musician working towards a BFA in Electroacoustic Studies at Concordia University. CrossTalk premiered in 2013 at the University of Toronto, followed by a roundtable discussion, and was presented later that year at Eastern Bloc in Montréal. His other works include Soup Phase, a soundscape composition performed at Stazione di Topolo’s annual international telematic concert ToBe Continued in 2013, and Music for 22 Email Machines, which was performed by the Concordia Laptop Orchestra in Montréal in 2013. His paper “Using an Augmented Electric Guitar as Both a Controller and as a Polyphonic Instrument,” other compositions and his electroacoustic and sound studies podcast can be found on his website. Michael attended the 2012 Toronto Electroacoustic Symposium and the 2012 New Interfaces for Musical Expression conference in Ann Arbor MI (USA). His areas of interest include the augmentation of musical instruments and networked music performance.
Source separation algorithms for audio, which attempt the extraction of constituent streams within a complex sound scene, have many compositional applications in sound transformation. Nascent polyphonic transcription technology is commercially available (e.g., AudioScore Ultimate 7) and can sometimes act to resynthesize after alterations in the spectral domain (e.g., Melodyne’s Direct Note Access), though a full re-synthesized separation of all timbral parts or sound sources in a complex recording remains open research. Nonetheless, there are source separation algorithms becoming available to composers in some toolkits and plug-ins, and this presentation will survey some options. In particular, two plug-ins embodying live implementations of source separation for SuperCollider will be discussed and demonstrated.
SourceSeparation uses non-negative matrix factorisation (NMF) to break a spectral view of audio down into component parts, creating masks for the resynthesis of isolated parts (Lee and Seung 2001; Wang and Plumbley 2005). It necessarily runs in two stages: an offline (background thread) analysis module, and a live resynthesis module based on the (time-varying) masks created by NMF, which can act on newly arriving material once the masks have been prepared.
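The multiplicative-update NMF underlying such an approach (Lee and Seung 2001) can be sketched in a few lines — an illustration of the technique, not the plug-in’s implementation:

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Multiplicative-update NMF (Lee and Seung 2001): factor a
    non-negative spectrogram V (freq x time) into k spectral bases W
    and their activations H. Sketch of the general technique only."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, k)) + 1e-3     # strictly positive init
    H = rng.random((k, T)) + 1e-3
    for _ in range(iters):
        # Updates preserve non-negativity by construction.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def masks(W, H):
    """Per-component soft masks: component i's share of the model
    spectrum, usable for resynthesis of isolated parts."""
    model = W @ H + 1e-9
    return [np.outer(W[:, i], H[i]) / model for i in range(W.shape[1])]
```

Applying each mask to the original complex spectrogram and inverting yields one separated stream per component; the masks sum to (approximately) one, so the streams reassemble the original.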
MedianSeparation allows the separation of the percussive and tonal parts of a signal. The algorithm is based on the recognition that sustained horizontal trails in the spectrum correspond to tonal components while vertical lines are transient percussive events; at each time-frequency cell, the larger of a horizontal and a vertical median denotes a tonal or percussive assignment, respectively (FitzGerald 2010). The live implementation is relatively efficient (20% of CPU for typical settings on an older MacBook Pro) and has been extended from the original FitzGerald paper to incorporate pragmatic options such as mean rather than median filtering (at much reduced CPU cost) and a choice of median size, including very short sizes for low-latency applications.
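The horizontal-versus-vertical median comparison can be sketched on a magnitude spectrogram — an offline illustration of the FitzGerald (2010) idea, with names and defaults that are this example’s, not the SuperCollider plug-in’s:

```python
import numpy as np
from scipy.ndimage import median_filter

def median_separate(mag, size=17):
    """Split a magnitude spectrogram (rows: frequency, columns: time)
    into tonal and percussive parts, after FitzGerald (2010).
    Illustrative sketch only."""
    # Median along time: sustained horizontal trails (tonal) survive.
    tonal = median_filter(mag, size=(1, size))
    # Median along frequency: vertical transients (percussive) survive.
    percussive = median_filter(mag, size=(size, 1))
    # Each time-frequency cell goes to whichever median is larger.
    tonal_mask = tonal >= percussive
    return mag * tonal_mask, mag * ~tonal_mask
```

On a toy spectrogram containing one sustained tone (a horizontal line) and one transient (a vertical line), the two masks cleanly pick out the respective events.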
Compositional experiments will be outlined, including offline recursive median separation with different parameter settings for window size, median size and the percussive / tonal parts.
Nick Collins is currently Lecturer in Music Informatics at the University of Sussex and will become Reader in Composition at Durham University in September 2013. His research interests include live computer music, musical artificial intelligence and computational musicology, and he is a frequent international performer as composer-programmer-pianist, from algoraves to electronic chamber music. His latest book, co-written with Margaret Schedel and Scott Wilson, is the volume on Electronic Music in the Cambridge University Press Introductions series. Further details, including publications, music, code and more, are available online, somewhere.
14:00–15:00 • Session #8: Interdisciplinary Practices and Waveform Modulation
Chair: jef chippewa
How do technologies of representation communicate presence within the context of performance and how does presence influence the construction of meaning within inter-sensory compositional discourse? Fear of Flight, a one-hour multimedia production, integrates abstract audiovisual narrative with a live dancer’s floor and aerial performance. It considers the sensory experience of the audience, who must adapt to their own physical situation in the surround projection environment. Real-time video capture is employed to synchronize aspects of the dancer’s movement in the air with the audiovisual composition during the performance. Part of the æsthetic and technical challenge of the piece lies in the seamless integration of high-resolution, pre-rendered, audiovisual material with audiovisual material that is generated dynamically within the moment of performance. Compositionally, Fear of Flight demonstrates multiple ways in which the production of affect between media can evolve successively within a compositional structure to achieve strong æsthetic results.
Freida Abtan is a Canadian multi-disciplinary artist and composer. Her music falls somewhere in between musique concrète and more modern noise and experimental audio and both genres are influential to her sound. Her work has been compared to bands such as Coil and Zoviet France because of her use of spectral manipulation and collage. Her research interests revolve around inter-sensory composition. She works with fixed and reactive audiovisual media for concert diffusion, installation and large-scale multimedia production, as well as computer vision techniques and sensor-based technologies. Her work has been presented internationally in festivals such as the International Computer Music Festival (2009–12), The Mutek Festival of Electronic Music and Digital Arts (2006, 2008, 2010), The Cap Sembrat Festival (2008) and The Spark Festival of Electronic Music and Arts (2008). She has performed at venues such as WORM Rotterdam (2012) and the Great American Music Hall in San Francisco (2009).
Oscillators are a key concept in synthesis, manifested in both hardware and software. A typical oscillator provides controls over the frequency of oscillation and the instantaneous phase of the process, in addition to a selection of the waveform to be oscillated. In classic modulation synthesis, the frequency, phase or amplitude of oscillators is modulated; however, the other parameter, the selection of the waveform, is traditionally left untouched. The paper introduces the idea of an oscillating oscillator, or more simply, waveform modulation.
The paper will present a brief discussion of oscillators and their implementations in digital and analogue systems. Implementations in common music programming languages will be presented and discussed, including Pure Data, Max/MSP and ChucK. Finally, an analysis of the resulting overtone series for the alternating waveforms is presented along with accompanying sound examples.
In digital implementations of oscillators, the frequency parameter is generally expressed in Hertz. In order to optimize the computational load and resources required by the oscillator, the samples of the waveform are stored in a wavetable and accessed during playback. In a waveform modulation oscillator using this technique, more than one period of a waveform is stored in the wavetable, which requires a small intervention in order to retain the semantics of the oscillator’s frequency control: for n waveform periods stored in the table, the frequency control parameter must be divided by n.
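The wavetable bookkeeping can be sketched as follows, assuming for illustration a table that alternates sine and square cycles; the names and defaults are this example’s, not the paper’s:

```python
import numpy as np

def make_modulated_table(n=2, length=2048):
    """Build a wavetable holding n alternating single-cycle waveforms
    (here: sine, then square) back to back -- a sketch of the
    waveform-modulation idea, not the authors' implementation."""
    period = length // n
    t = np.arange(period) / period
    sine = np.sin(2 * np.pi * t)
    square = np.sign(np.sin(2 * np.pi * t))
    cycles = [sine if i % 2 == 0 else square for i in range(n)]
    return np.concatenate(cycles)

def oscillate(table, freq, sr=44100, dur=0.01, n=2):
    """Scan the table at freq / n so that each embedded cycle still
    sounds at the requested frequency in Hertz."""
    length = len(table)
    phase_inc = (freq / n) * length / sr   # divide by n: keep semantics
    idx = (np.arange(int(sr * dur)) * phase_inc) % length
    return table[idx.astype(int)]
```

Because the table holds n periods, scanning it once per 1/freq seconds would sound an octave (or more) too high; dividing the scan rate by n restores the expected pitch while the waveform alternates once per output cycle.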
Adam Tindale is an electronic drummer and digital instrument designer. He is an Associate Professor of Human-Computer Interaction in the Digital Futures Initiative at Ontario College of Art and Design University. Adam performs on his E-Drumset, a new electronic instrument that utilizes physical modelling and machine learning with an intuitive physical interface. He completed a Bachelor of Music at Queen’s University, a Master of Music Technology at McGill University and an Interdisciplinary PhD in Music, Computer Science and Electrical Engineering at the University of Victoria.
15:15–16:45 • Session #9: EA Analysis
Chair: David Litke
The study of the activity of magnetic tape composition based on its “written” sources is an emerging field of research. Proceeding from the consideration of composition as a performative process (Benson, 2003; Straebel, 2009; García Karman, 2013a), this discipline has dealt with the analysis and creative reconstruction of magnetic tape works (L. Austin, 2004), the role of the editor and the demand for critical editions (Brech, 2007; Friedl, 2012) and the urge for new approaches — aware of the physicality of magnetic media — to the creation of digital archives (García Karman, Adkins and Duque, 2012). The study of the materiality of tape music and its compositional sources, the criticism of those materials and the determination of their meaning are the subjects of this new philology of magnetic media.
This presentation will be concerned with the study of the magnetic tape sketches in the Roberto Gerhard Tape Collection. The analysis of those materials provides a privileged framework to understand Roberto Gerhard’s (1896–1970) technique of “sound composition” (García Karman, 2013b). The unfolding of the compositional process in time will reveal the details of Gerhard’s operations in the studio. In the course of the examination of the genesis of Gerhard’s sound montages, our discussion will deal with aspects such as borrowing and recycling as a creative method, the metamorphosis of components, or the roles of mixing and mastering. It will also be possible to discern the operations in the composer’s domestic studio from those made at the Radiophonic Workshop.
The investigation of Roberto Gerhard’s heterodox technique will be illustrated with examples of the author’s reconstruction of the Lament for the Death of a Bullfighter (1959), a pioneering radiophonic poem based upon Lorca’s famous elegy Llanto por Ignacio Sánchez Mejías whose “electronic setting” was entrusted to Gerhard by the BBC. Owing to Gerhard’s captivation by the sound of speech and the radiophonic nature of the piece, emphasis will be made on the relation between speech and sound composition. Each of the four sections of Lorca’s poem is treated differently by Gerhard in terms of sound-colour and patterning. The examination of the sound structures will lead to the consideration of the composer’s poetic operations, revealing the expressiveness and gestural symbolism conferred by the composer to recorded sound. The presentation will conclude with an overview of the practical difficulties involved in the reconstruction of early tape music based on magnetic tape sketches and the creative challenges posed by the concert adaptation of this radiophonic work.
Gregorio García Karman is a PhD candidate at the University of Huddersfield. In 2012 he was responsible for the creation of the digital archive of the Roberto Gerhard Tape Collection in the context of the AHRC project “The Electronic Music of Roberto Gerhard,” as Research Assistant (University of Huddersfield) and Research Associate of the Centre for Music and Science (University of Cambridge). Until 2012 he was a member of the Experimentalstudio of the SWR (Freiburg), performing in venues such as the Philharmonie (Berlin), Mozarteum (Salzburg), Arsenale (Venice), Nezahualcóyotl (Mexico DF) and Spiral Hall (Tokyo), and collaborating with composers such as Karlheinz Stockhausen, Georg Friedrich Haas and Julio Estrada. He has published on a range of topics including composition and performance practice, lecturing at venues including the India International Centre (New Delhi), Southbank Centre, Stockhausen Summer Courses, ZKM and the Reina Sofía Museum. In 2004, he obtained his Master’s Diploma of Advanced Studies in Musicology, after studies in piano, music theory and sound engineering.
The Standing Man was composed in 1995–96 by French-Canadian composer Christian Calon (b. Marseille, 1950) during his stay in Berlin as a guest of the DAAD (Deutscher Akademischer Austauschdienst), and was first presented in Berlin in June 1996. The work is a 24-track composition based on a ballade by François Villon and is intended to be performed on a 24-loudspeaker system installed on four levels. The subtitle, Adventures in Audio Reality, refers to the central topic of the work, i.e. the modifications of the sonic world. These modifications, or metamorphoses in Calon’s language, are experienced by a standing human and taken up by a camera. The work is a walk-through sound installation within a loop of repeating music, the audience taking different positions in the three-dimensional performance space to experience correspondingly different sound images. This paper aims to analyse and explore the composition in terms of its most significant musical component, its spatialization, in its narrative and acousmatic context.
Bijan Zelli was born in Teheran, Iran in 1960. After completing his studies in electrical engineering at Sharif University of Technology, he immigrated to Sweden, where he changed his career from engineering to music. He received his Master’s in Music Education in 1996 and his PhD in Musicology in 2001. He moved to the United States in 2007 and currently works as a music educator and researcher in San Diego, California. Bijan Zelli has given many music lectures in different countries and publishes academic papers in German, Farsi, Swedish and English. His most recent paper, “The Choreography of Noise: Analysis of Henry Gwiazda’s buzzingreynold’sdreamland,” was recently published in eContact!
Karlheinz Stockhausen, in his work Nr. 19 Solo für Melodieinstrument mit Rückkopplung (Solo for Melody Instrument with Feedback), sought a new conception of form, a “memory” form in which a feedback of musical ideas would interact in real time. The creation of the score itself follows an interactive process whereby the instrumentalist extracts fragments from Stockhausen’s pre-composed musical material and patches them together anew. A performance of Solo incorporates a variable length tape delay and feedback system that superimposes recorded material and plays it back live. It is this Strukturbildung (“structure formation” of electronic superimpositions) that will be the focus of analysis. Although Solo appears to be an open-form work, electronic superimpositions manifest structures which function at a macro-formal level, whereas content (and a number of other parameters) shape form at a micro-formal level. Thus, Solo has a definite fixed form: a structure of electronic superimpositions that Stockhausen systematically conceives and distributes across the six Versions of the work.
I will begin by examining and creating a nomenclature for electronic superimpositions that form patterns and manifest techniques that evolve across complete and partial cycles (sections). In an attempt to prove an overall structure of electronic form, I will present a topology of these patterns and techniques that demonstrates a systematic organization of elements.
Although musical analysis of a Version (or Versions) of Solo is by no means capable of providing an exhaustive understanding of form and content, it does yield insight into the multi-layered processes at play. Musical content affects form to varying degrees, ranging from negligible to significant; however, in no instances does musical content define form to the degree which electronic superimposition does. In fact, Stockhausen, in his choice of musical content, seems to select material that supports and complements the predetermined framework of electronic superimpositions. Thus, electronic superimpositions establish the foundations of structure and create form at a macro-level, while micro-formal elements carry out processes of subtraction and variation, shaping, but not undermining, the structural paradigm of superimpositions and imparting a uniqueness to Versions.
Stockhausen abandons the traditional exposition / development / recapitulation paradigm for a new conception of form, a “memory” form involving an interaction of acoustic and electronic feedback. Solo could be considered thematically non-developmental, but I contend that Stockhausen achieves a different type of development, one through structure, texture and diffusion, which amalgamates these traditional elements of form, thus creating a continuous, temporally displaced exposition / development / recapitulation structure. Stockhausen strove for, and achieved, “something new” in the composition of Solo; although his original intentions underwent a transformation in which the idea of a “structure formation” takes on a new meaning, the kernel of Stockhausen’s idea persists in the manifestation of electronic superimpositions. Today, Solo occupies a seminal position in the repertoire of live electronic music involving the recording, playback and processing of sound from one or more instrumentalists during concert performance.
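The recording-playback loop at the centre of Solo — a delay line whose attenuated output is fed back and mixed with the live input, so that material superimposes on itself with each pass — can be illustrated schematically. This is a sketch of the principle in digital form, not of Stockhausen’s analogue tape setup:

```python
import numpy as np

def tape_delay(x, delay, feedback=0.5, direct=1.0):
    """Feedback delay line: the delayed, attenuated output is mixed
    back with the input, so recorded material superimposes on itself
    with each pass of the loop."""
    y = np.zeros(len(x))
    for i in range(len(x)):
        delayed = y[i - delay] if i >= delay else 0.0
        y[i] = direct * x[i] + feedback * delayed
    return y
```

A single impulse fed into the loop returns at each multiple of the delay time, attenuated by the feedback factor on every pass, which is how successive cycles of Solo accumulate the superimposed structures discussed above.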
Originally from Edmonton, Canada, composer Mark Nerenberg completed a Bachelor and Master of Music at the University of Alberta, studying with Howard Bashaw, Malcolm Forsyth and Laurie Radford, and a Doctor of Musical Arts in Composition at the University of Toronto, studying with Christos Hatzis. His compositions encompass a wide range of styles, genres and techniques, including instrumental and vocal music, electronic music focusing on an interaction between computers and performers, and collaborative multimedia works. Recent works include Awakening the Electronic Forest (multimedia installation premiered at Toronto’s Nuit Blanche 2007), Lines (sound installation premiered at the 2008 Ottawa International Chamber Music Festival) and Ales (for soprano, recorder, viola, speaker and live electronics), commissioned by the Bird Project.
Maximilian Marcoll is supported by the Berlin Senate Cultural Affairs Department.
Mathew Hills, Michael Palumbo and performers are supported by EaSt, Department of Music, Concordia University, Montréal.
New Adventures in Sound Art (NAISA) would like to acknowledge the support of the SOCAN Foundation.