Toronto International Electroacoustic Symposium 2015

ABSTRACTS AND BIOGRAPHIES

Day 2 — Thursday 20 August

09:45–11:15 • Paper Session #1: Listening

Venue: Canadian Music Centre
Chair: Alexa Woloshyn

Gliding Slowly: The interrelationship between body, voice, listening and the environment

by Wendalyn Bartley

The focus of this presentation will be an exploration of listening to the body in interaction with specific environments through the lens of my compositional process. How do we attune our bodies to a given environment in order to enter into the specific characteristics of any given location? What role does the voice play in this process of attunement, and why is the voice the ideal antenna? I will address these questions in relation to two specific sites and the compositions that arose out of my interactions with these places, while also addressing the unique relationship between the voice and body, and between the voice and environment, upon which my explorations depend.

The first site is the Hal Saflieni Hypogeum, located in Malta. This 5000-year-old underground labyrinthine chamber is a UNESCO World Heritage Site with acoustic properties that have a visceral effect on the body, and it is structured in such a way as to support a form of sound diffusion throughout the entire three-story chamber. I will incorporate my Hypogeum field recordings and compositional excerpts as part of my presentation. The second focus will be urban natural environments, including the Don River watershed in Toronto and Stanley Park in Vancouver. Soundscape recordings from these locations and my voice-based interaction with these environments form the basis of my new interactive work Gliding Slowly, commissioned by NAISA through the OAC to be performed at this year’s Sound Travels Festival of Sound Art. The process behind this work entails a probing into the “body of origin”, the body of birth before it has been socialized out of its own knowledge of itself: the wild body living now within a tamed environment, the body-voice-psyche engaging with the forces of nature.

Wendalyn Bartley is a Toronto-based composer and electrovocal sound artist whose work combines influences from electroacoustic and concert music, extended voice work, improvisation, soundscape and Deep Listening practices. Her vocal explorations have been primarily influenced by the European Roy Hart tradition, renowned for its unique and pioneering work with the 8-octave voice, with the relationship between voice and psyche, and with cultivating a performance practice that connects body, breath and voice with expressivity. Her recent CD of electrovocal compositions, Sound Dreaming: Oracle songs from ancient ritual space, is created from vocal and soundscape recordings made at ancient temple and cave sites in Greece, Crete and Malta and is mixed in 5.1 surround sound. Her artistic vision is to create works that reveal the multiple stories that the voice and body hold, and to create work that contributes to ways of being that are in alignment with nature.
http://www.awakeningyourvoice.com

A Dialogue Between the Seen and the Heard: The live use of sound as a sculptural material and sculpture as a sound instrument in Cuboid

by Ben Nigel Potts

In this paper the multidirectional relationship between sound and sculpture is explored in relation to Cuboid. Cuboid is both the author’s visual sculpture and a sound art composition performed live and in real time, using the visual sculpture as an instrument for sound generation and as a source of compositional stimuli. The sounds of the metallic sculpture are processed, manipulated and diffused in space using a laptop. The resulting sounds are intended to form an extension of the visual sculpture, creating one multimedia, multi-sensory structure. The author argues that for sound to gain a sculptural identity, it needs two aspects: space and physicality.

Firstly, Cuboid is influenced and inspired by the use of space in the sculptures of Richard Serra and Dan Flavin. Their works frame or activate specific shapes within the larger gallery space for the viewer to engage with. This engagement seeks to provoke awareness and to change the perception of the space. The author’s work uses sound as the material that frames and changes the space. Secondly, physicality can be achieved through sound resonating with objects or the human body.

Works by Maryanne Amacher and Jacob Kirkegaard are intended to cause the listener’s cochlea to emit a frequency of its own origin. These responses are called distortion product otoacoustic emissions (DPOAEs). Cuboid uses this phenomenon as a method to heighten the physicality of the piece. The author’s notions of how visual sculpture can be used conceptually as a compositional tool are discussed by drawing links between Manuella Blackburn’s compositional application of Denis Smalley’s spectromorphology (2009 and 1997, respectively) and 1960s minimalist sculpture. The author details the use of sculpture as a score in Cuboid.
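
For context (a detail not given in the abstract itself, but standard in the psychoacoustics literature): when two primary tones at frequencies f1 < f2 are presented together, the most prominent distortion product otoacoustic emission typically appears at the cubic difference frequency

\[ f_{\mathrm{DP}} = 2 f_1 - f_2, \]

so primaries at, for example, 1000 Hz and 1200 Hz can elicit a tone near 800 Hz generated within the listener’s own cochlea.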

Ben Nigel Potts is an artist exploring the relationship between sound and sculpture. He focuses on using sound as a sculptural material by investigating its physical and spatial potential and its effect on the human senses. He also works with visual objects, exploring their sonic and musical capabilities. This year will see Ben perform at the Electric Spring Festival and present papers at the University of Leeds and the Toronto International Electroacoustic Symposium. Ben is currently completing his PhD at the University of Huddersfield under the supervision of Professors Michael Clarke and Monty Adkins.
http://www.benpotts.co.uk

Hearing Electronic Voices

by Ryan Olivier

In this paper I will explore the heightened experience of metaphorical exchange through multimedia by examining Jaroslaw Kapuscinski’s work Oli’s Dream. The starting point will be the expansion of visual enhancement in electroacoustic compositions, due to the widespread availability of projection in concert halls and the multimedia expectations created by 21st-century Western culture. With the use of visual representation comes the potential to map musical ideas onto visual signs, creating another level of musical cognition. The unfolding of visual signifiers offers a direct visual complement to, and interaction with, the unfolding of aural themes in electroacoustic compositions.

This paper will explore the current research surrounding metaphorical thematic recognition in electroacoustic works whose transformational processes might be unfamiliar, and which in turn create fertile ground for the negotiation of meaning. The interaction of media and the differences created between the various signs within the music and the visual art create a heightened concert experience that is familiar to and in many ways expected by contemporary listeners.

Ryan Olivier grew up in the southern United States and attended Loyola University New Orleans College of Music. He recently earned his doctoral degree in music composition from Temple University where he studied composition and multimedia with Maurice Wright and Matthew Greenbaum. His work has been featured at various festivals across the US and internationally in countries such as Iceland, Taiwan and the UK. While Ryan enjoys composing for both traditional concert ensembles and fixed media, his current compositional and research focus is the incorporation of real-time interaction between live concert performers and visualized electronic music. For his work in this field he was recently awarded the Ruskin Cooper Outstanding Student Paper Award by the Mid-Atlantic chapter of the College Music Society. Ryan Olivier currently teaches courses in music at Temple University and St. Joseph’s University in Philadelphia.
http://www.ryanolivier.com

11:30–12:30 • Paper Session #2: Algorithmic Creation

Venue: Canadian Music Centre
Chair: Richard Windeyer

Audiovisual Coherence and Physical Presence: I am there, therefore I am.

by Dr. Louise Harris

This paper documents and discusses my personal audiovisual practice to date, in particular my recent attempts to bring a complex, largely algorithmic, fixed media method into a live, improvisatory performance context. Historically, my audiovisual work has been primarily concerned with making fixed media compositions that attempt to exhibit no sense of media hierarchy — i.e. the audio and the video function as part of a unified, cohesive system, what John Whitney described as a “complementarity” and Bill Alves has subsequently referred to as the “digital harmony of sound and light”. The fixed nature of these works, existing as delimited audiovisual artifacts to be installed in a gallery or played back during a concert, was an essential aspect of this attempt at cohesion; an attempt to limit additional demands on the audioviewer’s perception. This paper considers how these ideas concerning perceptual unity are shaped by the introduction of a physical presence into the audiovisual space, and how the sensory cohesion I have sought in my practice is altered by my physical bodily presence within a live performance context.

Louise Harris is an electronic and audiovisual composer, as well as a Lecturer in Sonic and Audiovisual Practices at the University of Glasgow. Louise specialises in the creation of audiovisual relationships utilising electronic music and computer-generated visual environments. Her audiovisual work has been performed and exhibited nationally and internationally, including at AV Festival (Newcastle, 2010), Musica Viva Festival (Lisbon, 2011), NAISA’s SOUNDplay festival (Toronto, 2011), Strasbourg Museum of Modern Art (2012, 2013, 2014), Piksel Festival (Bergen, 2012, 2013, 2014), Linux Audio Conference (2013, 2014), Festival Novelum (Toulouse, 2013), Sonorities Festival (Belfast, 2014, 2015) and Sweet Thunder Festival (San Francisco, 2014).
http://www.louiseharris.co.uk

Musical Behaviours in the Transformation Engine

by Bruno Degazio

This paper describes recent software development that attempts to address a deficiency in current musical composition software — the limited ability of such software to apply algorithmic processes to the practice of musical composition. Commercial musical composition software, such as Cubase and Logic, has become caught in the long-dominant paradigm of the multi-track tape recorder, whereby musical composition is expressed as the layering of recorded performances. By contrast, commercial animation packages such as Autodesk Maya, Adobe After Effects and Apple Motion provide sophisticated tools for the algorithmic creation of moving images. They do so with great precision and flexibility, and without limiting non-algorithmic “hand-crafted” details. I have attempted to remedy this problem through the implementation of “musical behaviours” in my composition software, The Transformation Engine.

Musical behaviours are small-scale algorithmic processes applied to individual sections of a composition. Each behaviour is directed toward a specific musical target such as pitch, dynamic or duration. Behaviours range in complexity from simple “shapes” such as straight or wavy lines, through physics simulations such as gravitational acceleration, to chaotic processes such as the logistic equation and the Lotka-Volterra system for simulating interactions between biological species. Multiple behaviours may be applied simultaneously with a cumulative effect, and the total number of sequential behaviours in a single composition is unlimited. The remainder of this paper describes the implementation of musical behaviours and demonstrates some of their common compositional uses.
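
As an illustration of the idea only (a minimal sketch, not the Transformation Engine’s actual code; the function names, scaling factors and the choice of Python are assumptions), a behaviour can be modelled as a function that generates an offset curve for one parameter, with several behaviours summed cumulatively over a base pitch line:

```python
# Minimal sketch: "behaviours" as small algorithmic processes, each aimed at a
# single musical target (here, pitch) and applied cumulatively to a base line.
# Names and scaling choices are hypothetical, not taken from the Transformation Engine.
import math

def wavy_line(n, depth=2.0, period=8):
    """A simple 'shape' behaviour: a slow wave over n steps."""
    return [depth * math.sin(2 * math.pi * i / period) for i in range(n)]

def logistic(n, r=3.9, x0=0.5, depth=4.0):
    """A chaotic behaviour based on the logistic map x <- r * x * (1 - x)."""
    values, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        values.append(depth * (x - 0.5))   # centre around zero and scale
    return values

def apply_behaviours(pitches, behaviours):
    """Sum the output of all behaviours and add it to the base pitch line."""
    offsets = [sum(b[i] for b in behaviours) for i in range(len(pitches))]
    return [round(p + o) for p, o in zip(pitches, offsets)]

base = [60] * 16                           # a static line of MIDI pitches
melody = apply_behaviours(base, [wavy_line(16), logistic(16)])
print(melody)
```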

Bruno Degazio is a film sound designer, composer, researcher and educator based in Toronto. His film work includes the special effects sound design for the Oscar-nominated documentary film The Fires of Kuwait, and music for the IMAX films Titanica and CyberWorld 3D as well as many other IMAX films, feature films and television dramas. As a researcher in the field of automated music composition he has presented papers and musical works at leading international conferences, including festivals in Toronto, Montreal, New York City, London, The Hague, Berlin, Tokyo and Hong Kong. Bruno Degazio is the designer of MIDIForth and The Transformation Engine, software systems with application to automated composition. He is currently Professor of Digital Tools in the BA Animation program of Sheridan College (Ontario).
http://www-acad.sheridanc.on.ca/~degazio

14:30–16:00 • Keynote Lecture

Venue: Wychwood Theatre
Host: Kevin Austin

Semiconducting: Making music after the transistor

by Nicolas Collins

Why does “Computer Music” sound different from “Electronic Music”? Nic Collins examines several traits that distinguish hardware from software in terms of their application in music composition and performance. He discusses the often subtle influence of these differences on various aspects of the creative process, and presents a number of inferences as to the “intrinsic” suitability of hardware and software for different musical tasks. His observations are based on several decades of experience as a composer and performer, and were made in close engagement with the music of his mentors and peers. The talk will be illustrated with examples from his own work.

New York born and raised, Nicolas Collins spent most of the 1990s in Europe, where he was Visiting Artistic Director of Stichting STEIM (Amsterdam) and a DAAD composer-in-residence in Berlin. An early adopter of microcomputers for live performance, Collins also makes use of homemade electronic circuitry and conventional acoustic instruments. He is editor-in-chief of the Leonardo Music Journal, and a Professor in the Department of Sound at the School of the Art Institute of Chicago. His book, Handmade Electronic Music — The Art of Hardware Hacking (Routledge), has influenced emerging electronic music worldwide. Collins’ indecisive career trajectory is reflected in his having played at both CBGB and the Concertgebouw.
http://www.nicolascollins.com

16:30–17:30 • Lecture-Recital

Venue: Wychwood Theatre
Host: Michael Palumbo

Cross-Media Transcription as Composition

by James O’Callaghan

This lecture-recital will present several compositional strategies for “transcribing” written acoustic music from electroacoustic sources and vice versa, particularly considering the relationship of such transcription with large-scale musical form. As composers increasingly work in both electroacoustic and acoustic media, cross-pollination between the two has become more common, and recent examples of composers creating written “transcriptions” or “arrangements” of acousmatic works (or, conversely, acousmatic works based on written works) provoke a number of interesting theoretical questions.

The paper will provide a brief survey and cursory analysis of works generated in this manner, including Philippe Leroux’s M (1997, for chamber ensemble and electronics), M.É (1998, acousmatic) and m’M (2003, for orchestra), Robert Normandeau’s Le renard et la rose (1995, acousmatic) and Baobabs (2012–13, for chamber ensemble and electronics), and Gordon Fitzell’s violence (2001, chamber ensemble) and evanescence (2006, for chamber ensemble and electronics).

Following this introduction, the paper will illustrate some practical concerns and consequences for musical form stemming from the author’s own compositional practice. This will include discussions of the works Isomorphic, Isomorph and Isomorphia (2013, acousmatic, orchestra, and orchestra and electronics, respectively), IF:IFF (2014, chamber ensemble and electronics) and Empties-Impetus (2015, acousmatic) and pre-echo (after empties) (2015, for string quartet), as well as a complete performance of Empties-Impetus.

Finally, through a discussion of the author’s compositional practice, the presentation will examine software-assisted approaches to transcription, the development of musical structure both intertextually and infratextually as part of the transcription process, the æsthetic consequences of developing a dialectic framework between a self-imposed process and intuitive realisations, and conceptual issues relating to cross-media similarity/difference, mimesis and self-quotation.

Empties-Impetus (2015) is the final work in a trilogy of acousmatic pieces that imagine the sounding bodies of instruments as interior spaces. Following Objects-Interiors (2013) concerning the piano, and Bodies-Soundings (2014) concerning an acoustic guitar and toy piano, Empties-Impetus examines the instruments of the string quartet as resonant spaces. The piece attempts to navigate the instruments as bearers of meaning, variously by grappling with the historical weight of the idiom and the recognisable quality of its timbres, and also by subverting that meaning by reassessing the objects according to their physical construction, spatial properties and different environmental contexts. Like the other works in the trilogy, a version of the work exists where the sound is partially diffused with transducers through the physical instruments on stage as resonant bodies.

James O’Callaghan is a composer and sound artist based in Montréal. His music intersects acoustic and electroacoustic media, employing field recordings, amplified found objects, computer-assisted transcription of environmental sounds and unique performance conditions. His works have been performed across North America, in Europe, New Zealand and Japan, and he has received commissions from the Groupe de Recherches Musicales, the National Youth Orchestra of Canada, Standing Wave Ensemble, Ensemble Paramirabo and Quasar quatuor de saxophones. His music has been awarded prizes including the Grand Prize of the SOCAN Foundation, first prizes in the Jeu de temps / Times Play Awards and Musicworks’ electronic composition competition, and a JUNO nomination. He received a Master of Music degree from McGill University in 2014, studying with Philippe Leroux, and a Bachelor of Fine Arts degree from Simon Fraser University in 2011, studying with Barry Truax.
http://www.jamesocallaghan.com

Day 3 — Friday 21 August

09:45–10:45 • Paper Session #3: Education/Pedagogy

Venue: Canadian Music Centre
Chair: Linda Antas

Moving Beyond the Weird, Creepy and Indescribable: Pedagogical principles and practices for listening to electroacoustic music in the general education classroom

by Alexa Woloshyn

Electronic beeps, synthesizer melodies and theremin portamento conjure up images of robots, aliens, space and a dystopian technological future. This association with science fiction has a significant legacy (Hayward 2004; Laudadio 2011), including the theremin in The Day the Earth Stood Still (1951), the first all-electronic soundtrack for Forbidden Planet (1956) and the synthesizer in Solaris (1972). This association also has an ideological precedent in the link of technology with the “mystical and visionary” (Collins and d’Escrivan 2007) in both early electronic music and science fiction, and in “electronic sound’s indexical relationship to intangible spaces, both cosmic and psychological” because they are “necessarily concealed, remote, detached” (Leydon 2004). By 2015, given the abundance of non-electronic science fiction soundtracks — such as A.I. Artificial Intelligence (2001), Prometheus (2012) and Interstellar (2014) — we should be transcending these automatic connections. Not so, it seems, in the ears of my undergraduate general education students. Recent exposure to electroacoustic music in class and at a faculty composers’ concert led these students to characterize the music as “creepy” and “like a science fiction soundtrack”, when, to my ears, the sounds had warmth and beauty, with many of the musical aspects we had been discussing in acoustic examples.

Following a brief history of electronic music in science fiction, and a synthesis of perception and agency in electroacoustic music (d’Escrivan 2006; Emmerson 2007; Smalley 1996 and 1997), this paper outlines pedagogical strategies for removing students’ aural and conceptual barriers to electroacoustic music, providing a vocabulary, and facilitating critical listening through activities and assignments based primarily on Elizabeth Barkley’s Classroom Engagement Techniques (2009) and Judy Lochhead’s principles of phenomenological listening (1995). The paper will demonstrate this pedagogical approach to listening using Hugh Le Caine’s Dripsody (1955) and three remixes released on the Canadian Music Centre’s Centretracks.

Alexa Woloshyn received a PhD in musicology from the University of Toronto and a master’s degree in musicology from the University of Western Ontario. Her research analyzes contemporary electroacoustic music, in particular works by Canadian composers that feature the recorded human voice. Additional research interests include the music of Inuit throat singer Tanya Tagaq and the DJ/producer collective A Tribe Called Red. Recent articles on Robert Normandeau and Hildegard Westerkamp have appeared in eContact! and Circuit. She has presented at national and international conferences, including the Toronto International Electroacoustic Symposium, the Society for Teaching and Learning in Higher Education, St. Thomas University’s Teaching and Learning Centre, IASPM-US, the Canadian University Music Society and the Art of Record Production. She is currently Visiting Instructor in 20th-Century Music at Bowling Green State University in Ohio.
http://bgsu.academia.edu/AlexaWoloshyn

Exploring the Musical Thinking of Middle School Students Composing Electroacoustic Music: Reflections from a qualitative case study

by Jeffrey Martin

Composition is becoming an important feature of general music education. Research emerging over recent decades has supported the developing interest in student composition, affording insights on process, product and the teaching-learning environment. However, electroacoustic (or sound-based) composition has been largely under-represented and under-researched, despite recent initiatives to develop curricular tools for this form of music making. This study used a qualitative case-study approach to examine the situated musical thinking and meaning-making of a purposeful sample of individual grade seven students at a middle school in Eastern Canada. The project comprised both an instructional and a research component, the former involving researcher-led sessions introducing sound-based music and composition techniques, followed by supervised individual student composing in the computer lab, assisted also by the school music specialist. A combination of observation, interviewing and analysis of sketches saved by each student throughout the entire process produced detailed case profiles of three students, which offered an in-depth, interpretive look at their individual approaches to, and perspectives on, electroacoustic composition. In addition, analysis revealed a number of themes common across the cases, concerning working method, approach to structure and overall purpose.

Jeffrey Martin is a music educator and composer who, in 2012, was appointed assistant professor at Mount Allison University to teach music education and music technology. Previously he taught at the China Conservatory in Beijing and, from 2005 to 2009, was Head of Music at Yew Chung International School, Pudong Campus (Shanghai). His compositions include electroacoustic pieces for fixed medium, scores for performance and collaborations with film directors, choreographers and visual artists. He has published articles in refereed journals such as Research Studies in Music Education, Organised Sound, General Music Today, and the International Journal of Music Education, as well as book chapters.

11:00–11:30 • Paper Session #4: Mirlitones

Venue: Canadian Music Centre
Chair: Brian Garbet

Mirlitones

by Peter Bosch

When audiences enter the NAISA Space they will see seven to twelve white PVC pipes suspended in the air, swinging in pendulum-like fashion. The pipes are covered at the top end with a membrane, and their motions are controlled by pneumatics so that they produce low buzzing tones in the space. Mirlitones was commissioned by DordtYart (Dordrecht, Netherlands) for the exhibition Kunst Werkt (April to September 2012). The title refers to a primitive instrument that has appeared in a multitude of forms in various parts of the world. All of these instruments consist of a hollow form with a membrane mounted on it that can be brought into vibration by blowing or singing. The best-known member of the family is the kazoo, still used today in pop music. The point of departure for the work was the spectacular noise produced by children with minuscule plastic mirlitones in the cavalcade of the Funeral of the Sardine, the final act of the biggest fiesta in the Spanish town of Murcia.

The artists would like to thank Günter Geiger for his assistance in the development of the software for Mirlitones. The presentation of Mirlitones is supported by Acción Cultural Española and Fonds Podiumkunsten / Performing Arts Fund NL.

Since 1990 the Bosch & Simons artist duo have created sound installations — often using pneumatics — in museums, at international symposiums and in concert halls around the world. The Krachtgever is their best-known piece for its Golden Nica, received in 1998 at the Prix Ars Electronica (Linz). Other projects are Cantan un Huevo, awarded at the 29th Competition of IMEB Bourges (2002), or Aguas Vivas, which obtained a mention at VIDA 6.0, Madrid (2003). In 2009, a retrospective exhibition of their work was held at La Tour du Pin, curated by GRAME (Lyon, France). In 2012, they premiered Mirlitones as part of the Kunst Werkt exhibition in DordtYart, Dordrecht and Wilberforces at “Winter Sparks,” FACT (Liverpool). In 2013, Mirlitones was featured at the International Computer Music Conference in Australia. Peter Bosch (1958) studied psychology at the Universities of Leiden and Amsterdam (1976–83) and thereafter studied sonology at the Royal Conservatory in The Hague (1986–87). Simone Simons (1961) studied at the audiovisual department of the Gerrit Rietveld Art Academy in Amsterdam (1980–85). Since 1997 they work and live in Valencia (Spain).

11:30–12:30 • Panel Session

Venue: Canadian Music Centre
Chair: Brian Garbet

“Fluidity” in Current Sonic Practice: Pedagogical and practical perspectives

by Louise Harris and Nick Fells

This session will focus on “fluidity” as a metaphor for the paradigmatic shifts in electroacoustic practices currently taking place, alongside the pedagogic and practical implications of these for practice-based researchers within higher education.

A recent call for submissions to the journal Organised Sound, themed “Punkacademia, oppositional culture and the post-acousmatic in electroacoustic music,” points to current tensions and opportunities in technologically mediated creative practice, particularly in its relation to educational institutions. The call suggests that “there is both no better and no worse a time to be creating digital media.” We take a slightly different perspective, asking what conditions would represent a “better” or a “worse” time to be creating digital media — and are we really post-anything? Terms such as “post-digital” and “post-acousmatic” variously gain currency, seeking perhaps to illuminate the contemporary by mapping out practice in relation to its immediate past. This panel asks: were we ever conscious of a “digital” or “acousmatic” condition which we are now “post”, or has a fluidity (of style, genre and media) of which practitioners have long been aware simply become increasingly visible? We propose that a rediscovery of performance has rendered the “acousmatic” merely a state of mind or point of audition, mirrored in our ability to freewheel through the sonic internet, which itself can be considered a mode of performance. In discussing how the global sharing of creative work has replaced the “canon” with chains of transitorily related media experiences, we aim to illuminate how such fluidity affects our teaching of musical-technological practices.

Nick Fells: The performative aspects of making electroacoustic music suggest a fluid continuum between “live” and “fixed” work. By convention we have regarded the acousmatic as fixed work, “cinema for the ear”, distanced somewhat from performance. On the other hand, our enthusiasm for performance technology risks fetishizing interactivity at the expense of listening. Might these be reconciled through the idea of “playing with playback” — an idea with particular appeal when considering teaching sonic arts practices? In his introductory teaching, Fells considers playing with playback as a form of instrumentalising or “creative abuse”. He considers the microphone as instrument, instant replay and “live phonography”. Showing examples of live processing in Max foregrounds the digital, and the acousmatic becomes a condition dependent on point of audition, a way of listening. In this situation, the digital and the acousmatic reinforce one another, representing not so much a “post” condition as a hybridisation.

Louise Harris’ perspective as an early-career audiovisual composer with a somewhat diverse educational and practical background will inform her contribution to this panel. Having come relatively late to working with technology (she focused on acoustic composition until midway through her PhD), she initially formed her understanding of and experience with technologically mediated creative practice largely outside the context of academia, through very close engagement with contemporary practice and with less concern for the traditions and conceptual ideologies associated with electroacoustic and acousmatic music. This affords a certain fluidity of approach with respect to the influences she draws upon, both as a practitioner and as a lecturer. Additionally, she will consider the concept of fluidity as it relates to her own practice, which crosses disciplinary boundaries, and consider what her self-designation as an audiovisual composer articulates about her perspectives on the visual and auditory components of her work.

Louise Harris is an electronic and audiovisual composer, as well as a Lecturer in Sonic and Audiovisual Practices at the University of Glasgow. Louise specialises in the creation of audiovisual relationships utilising electronic music and computer-generated visual environments. Her audiovisual work has been performed and exhibited nationally and internationally, including at AV Festival (Newcastle, 2010), Musica Viva Festival (Lisbon, 2011), NAISA’s SOUNDplay festival (Toronto, 2011), Strasbourg Museum of Modern Art (2012, 2013, 2014), Piksel Festival (Bergen, 2012, 2013, 2014), Linux Audio Conference (2013, 2014), Festival Novelum (Toulouse, 2013), Sonorities Festival (Belfast, 2014, 2015) and Sweet Thunder Festival (San Francisco, 2014).
http://www.louiseharris.co.uk

Nick Fells is a composer based in Glasgow, Scotland. He is mainly concerned with refining improvisation with recorded sound and working with other performers to hone source materials and approaches. Primarily he strives to nurture delicacy in technologically mediated sound work whilst maintaining a “body” of sound. He teaches at the University of Glasgow, where he coordinates Master’s and PhD programmes in Sonic Arts as well as an undergraduate degree in Electronics with Music. Recent pieces have included ps[c]yched, for string quartet, electronics and bicycles, composed for the Glasgow Commonwealth Games cultural festival, Sublimation for Scottish Opera’s Five:15 series, and Rifts, a wavefield synthesis surround sound work made for Sony’s Creative Lab in Tokyo and remixed for the Game of Life system in Den Haag. He is a founding member of the Glasgow Improvisers Orchestra and co-directs the web archive/label project Never Come Ashore.
http://nickfells.net

16:30–17:30 • Lecture-Recital

Venue: Wychwood Theatre
Host: Michael Lukaszuk

“Mockingbird”: Confessions of an abstracted aural documentary

by Brian Garbet

Social criticism and the function of music as a means of political expression have been present in contemporary art music since at least the 19th century. A particularly compelling and clear means of such communication is the tradition of the anti-establishment or protest song, which is universal. There exists a pathway of electroacoustic studio techniques and discoveries that has led to an increased presence of environmental and socio-political commentary within fixed-media composition. These provide a foundation, as well as a trajectory, for an emerging socially engaged sub-genre of acousmatic composition. My approach, which I refer to as “abstracted aural documentary”, builds on the radio documentary paradigm and combines it with the transformational language of electroacoustic music. It involves a hybrid of metamorphic techniques and unconventional influences such as literary devices and specific documentary film techniques. This derivation of cinematic elements and literary techniques towards a provocative narrative is the foundation of the sub-genre as I approach it. My current æsthetic involves a focus on environmental and socio-political content.

In this lecture-recital I will discuss and perform the octophonic abstracted aural documentary Mockingbird. The sonic material of this conceptual work consists of field recordings and found sound, left both untouched and transformed. Compositional strategies with sound material and subject matter that guide the narrative and develop the form will be examined. The symbolic use of spatialization, localization and space will also be addressed.

Brian Garbet has composed acoustic and electroacoustic music for film, theatre and concert. While studying at Simon Fraser University (Vancouver), he was a Jeu de temps / Times Play prizewinner with his composition Ritual. He has received airplay and performances across Canada, the United States, New Zealand and Finland. After years of touring and recording with the rock band Crop Circle, Brian completed his Master of Music at the University of British Columbia and recently began work towards his PhD at the University of Calgary. Currently working with Laurie Radford, he has also studied with Barry Truax, Hildegard Westerkamp, Rodney Sharman, Bob Pritchard, Keith Hamel and Allan Bell.

Day 4 — Saturday 22 August

09:45–10:45 • Paper Session #5: Musicking

Venue: Wychwood Theatre
Chair: Adam Tindale

Touch, Device, Performance

by Nick Fells

This paper presents a personal approach to making sound work, viewing “liveness” from a combined media-archaeological and social perspective. The starting point is Small’s notion of musicking, where making sound together socially is seen as a way of affirming cultural belonging and a necessary human behaviour. Through making sound together socially, we practice human relationships both verbally and non-verbally through paralanguage, gesture and bodily engagement — even without making sound at all, but simply attending to sound bodily and cognitively in listening. Since prehistory, tools and technologies have been brought to bear on the processes of utterance and listening, leading eventually to the emergence of sound recording, as mapped out for instance by Jonathan Sterne. Now, with an immense legacy of sound media artefacts, new realms exist for social-sonic interplay. This paper describes a personal relationship to sound media in terms of a “revealing of the hidden voices of things”: essentially a form of media archaeology described by Wolfgang Ernst as a “microphysical close reading of sound”, where machines embody “a kind of media knowledge waiting to be unfrozen.” This “microphysical close reading of sound” suggests the acousmatic, a term that for a time implied fixity: sound rendered to a fixed medium, distanced from the physical improvisatory act that gave rise to it. This paper argues rather for liveness: the making manifest of the relationship of sound with action, of utterance with utterer, as a public and socially necessary act. It considers the word “device” in relation to performance, through its dual meaning as both a mechanism and a plot or trick. It considers a number of live works, leading to a view of performance as a way of engaging with devices that attains social significance through its capacity to reveal, to produce a narrative that unfolds over time.

Nick Fells is a composer based in Glasgow, Scotland. He is mainly concerned with refining improvisation with recorded sound and working with other performers to hone source materials and approaches. Primarily he strives to nurture delicacy in technologically mediated sound work whilst maintaining a “body” of sound. He teaches at the University of Glasgow, where he coordinates Master’s and PhD programmes in Sonic Arts as well as an undergraduate degree in Electronics with Music. Recent pieces have included ps[c]yched, for string quartet, electronics and bicycles, composed for the Glasgow Commonwealth Games cultural festival, Sublimation for Scottish Opera’s Five:15 series, and Rifts, a wavefield synthesis surround sound work made for Sony’s Creative Lab in Tokyo and remixed for the Game of Life system in Den Haag. He is a founding member of the Glasgow Improvisers Orchestra and co-directs the web archive/label project Never Come Ashore.
http://nickfells.net

Costumes for Cyborgs: New body experience in sound and movement

by Izzie Colpitts-Campbell

Costumes for Cyborgs: Sound creates a performative articulation of the wearer’s movement, producing sound through biofeedback. By placing devices on our bodies, the piece asks the wearer to engage physically with Haraway’s myth of the cyborg (1991), blurring the lines between wearer and interface. In this engagement we allow for shifts and augmentations of our relationship to bodies, and create space for an expansion of the idea of the body via non-biological tools. Piezoelectric elements pick up the physical vibration of the body. The sound feedback produced is a smoothly shifting frequency, generated by an accompanying computer, that modulates in response to the intensity and speed of the vibrations. The volume is set at a point which allows the presence of the device to fall in and out of consciousness, letting the experience of wearing the device be acknowledged by, as well as amalgamated with, the body.
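
A minimal sketch of the kind of mapping described above (this is not the artist’s actual software; the block size, frequency range and smoothing constant are assumptions), in which the short-term intensity of a piezo signal drives a smoothly gliding tone:

```python
import numpy as np

SR = 44100       # audio sample rate
BLOCK = 512      # analysis block size

def piezo_to_frequency(piezo, f_min=80.0, f_max=400.0, smooth=0.05):
    """Map block-wise RMS intensity of a piezo signal to a smoothly
    shifting frequency trajectory (one value per block)."""
    freqs, f = [], f_min
    for start in range(0, len(piezo) - BLOCK, BLOCK):
        block = piezo[start:start + BLOCK]
        rms = np.sqrt(np.mean(block ** 2))          # vibration intensity
        target = f_min + (f_max - f_min) * min(rms * 10, 1.0)
        f += smooth * (target - f)                  # one-pole smoothing for a gliding tone
        freqs.append(f)
    return np.array(freqs)

def render_tone(freqs):
    """Render a sine tone whose frequency follows the trajectory."""
    per_sample = np.repeat(freqs, BLOCK)
    phase = 2 * np.pi * np.cumsum(per_sample) / SR
    return 0.2 * np.sin(phase)

# Example with a synthetic 'vibration' signal standing in for live piezo input.
t = np.linspace(0, 2, 2 * SR, endpoint=False)
fake_piezo = 0.1 * np.random.randn(len(t)) * (0.5 + 0.5 * np.sin(2 * np.pi * 0.5 * t))
audio = render_tone(piezo_to_frequency(fake_piezo))
```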

Drawing on the conversation between Marco Donnarumma, Claudia Robles and Peter Krin in Performing Biological Bodies (Lopes and chippewa, 2012), Costumes for Cyborgs: Sound plays in the boundaries between intentional and unintentional biodata. The piece joins this conversation by exploring the possibilities of living with these devices in contexts outside of a performance setting. Equal parts instrument and fashion object, the “costume” considers how future wearables may augment the experience of the action of sound. Through the body’s production of atypical sounds, the goal of Costumes for Cyborgs: Sound is to engage all participants in the activity that Christopher Small calls “musicking” (1998): exploring the relationships that arise in sound, music, expression and communication.

Izzie Colpitts-Campbell is a software and electronic artist whose work includes wearable electronics and physical computing, as well as game design and programming. She is completing a BFA in integrated media from OCAD University with a minor in Wearable Tech. She has been exhibiting her work since 2008, most recently as part of the AGO’s First Thursday, as well as co-authoring the paper “Monarch: Self-Expression Through Wearable Kinetic Textile” for the art track at the TEI’15 Conference on Tangible, Embedded and Embodied Interaction at Stanford University. Izzie currently works as a research assistant at the Social Body Lab at OCAD University and as an indie game developer in Toronto. She also sits on the board of directors of Dames Making Games Toronto, a not-for-profit supporting women in games, as co-chair of the events and outreach committee.
http://izziecolpitts.com

11:00–12:30 • Lecture-Recitals

Venue: Wychwood Theatre
Chair: Bruno Degazio

Punchcard Rewind: Real-time audience participation meets electroacoustic music performance in a recent performance of Udo Kasemets’ “Tt (Tribute)”

by Richard Windeyer and Adam Tindale

The presenters will chronicle their involvement in a recent staging of Tt — Tribute to Buckminster Fuller, Marshall McLuhan, John Cage, a cybernetic audience-controlled, audio-visual performance piece by Canadian composer Udo Kasemets, presented as part of Opening Up the Space: A Festival of Music and Theatre co-organized by T. Nikki Cesare Schotzko (Centre for Drama, Theatre and Performance Studies; University of Toronto), Dennis Patrick (Faculty of Music; University of Toronto) and David Schotzko (independent artist and scholar) in March 2015. Originally composed in 1968, Tt is an audiovisual presentation of the words, ideas and images of Buckminster Fuller, Marshall McLuhan and John Cage, in which each audience member registers his/her preferences regarding “the quantity, mode, intensity and character… [of] the sound/light environment on specially prepared computer cards which are then processed by a computer team to provide data to the operators of audio-visual equipment and other performers who according to a prepared graded code control the light/sound environment and the presentation of selected information.”

The presenters will demonstrate the methods of data generation and collection developed specifically for this performance, including the curation of performance materials and media and the necessary mapping strategies, as well as the integration of “lo-tech” computer punch cards and real-time (manual) data entry with more contemporary computational tools such as Max/MSP and Open Sound Control. The presentation will also include accounts of post-performance feedback received from audience members regarding their subjective experience of participation. In discussing this work, the presenters hope to open up the discourse around the nature of participatory models and real-time generative systems in the performance of electroacoustic music, past, present and future.
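
A minimal sketch of the kind of bridge described (an illustration only, not the presenters’ actual patch: the /tt/… address scheme, port number and use of the third-party python-osc package are all assumptions), in which an operator keys in each collected card and the values are forwarded to Max/MSP over Open Sound Control:

```python
# Illustrative sketch: forwarding manually keyed punch-card tallies to Max/MSP
# over Open Sound Control. The address scheme and port are hypothetical.
from pythonosc.udp_client import SimpleUDPClient

MAX_HOST, MAX_PORT = "127.0.0.1", 7400     # e.g. a Max [udpreceive 7400] object
client = SimpleUDPClient(MAX_HOST, MAX_PORT)

CATEGORIES = ["quantity", "mode", "intensity", "character"]

def send_card(card_id, values):
    """Send one audience card's preferences (one number per category) to Max."""
    for category, value in zip(CATEGORIES, values):
        client.send_message(f"/tt/{category}", [card_id, value])

# The operator types in each card as it is collected, e.g. "7 2 5 9".
if __name__ == "__main__":
    card_id = 0
    while True:
        line = input("card> ").strip()
        if not line:
            break
        send_card(card_id, [int(v) for v in line.split()])
        card_id += 1
```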

Richard Windeyer is an electronic musician, composer, percussionist, sound designer and researcher focusing on the integration of generative systems and sonic interaction design methodologies in participatory performance. He was a co-founder of the Canadian Association for Sound Ecology and an adjunct professor of music technology and electroacoustic music at Wilfrid Laurier University. He is currently a doctoral student at the University of Toronto’s Centre for Drama, Theatre and Performance Studies and the Knowledge Media Design Institute.
http://www.richardwindeyer.com

Adam Tindale is an electronic drummer and digital instrument designer. He is an Associate Professor of Human-Computer Interaction in the Digital Futures Initiative at OCAD University. Adam performs on his E-Drumset, a new electronic instrument that utilizes physical modeling and machine learning with an intuitive physical interface. He completed a Bachelor of Music at Queen’s University, a Master’s degree in Music Technology at McGill University and an Interdisciplinary PhD in Music, Computer Science and Electrical Engineering at the University of Victoria.
http://www.adamtindale.com

Turning the Tables: The audience, the engineer and the acousmatic violin

by Chris Mercer and Rodolfo Vieira

Traditionally, once a composer has committed notes to paper, the compositional process is complete. How can we change this paradigm, so that an audience member can actively participate in the creative process during an actual performance? Being able to manipulate, sculpt, shape and create new soundscapes on a whim, and to both react and instigate, transforms the audience member into a performer on the spot. Even without prior experience or musical knowledge, the audience member can become a partner in the music making. The ability to collaborate with an artist at this intimate level is a unique experience, one that has not been available to the majority of the concert-going public until now. Such inclusivity across domains is a fascinating way to foster the emergence of new ideas, unleashing unheard-of creative potential. It opens the door to fresh collaborations and novel experiences that cannot be reproduced digitally without losing critical dimensions.

Chris Mercer received a PhD in composition at the University of California, San Diego in 2003. His principal teachers were Chaya Czernowin and Chinary Ung (instrumental music), and Peter Otto and Roger Reynolds (electronic music). He has held artist residencies at Experimentalstudio des SWR, Künstlerhaus Schloss Wiepersdorf and Sound Traffic Control (San Francisco). His music has been performed by Ensemble SurPlus, SONOR Ensemble and Schlagquartett Köln. His recent electroacoustic music and research focus on animal communication, especially nonhuman primate vocalization, and he has undertaken research residencies at the Duke University Lemur Center, Wisconsin National Primate Research Center and the Brookfield Zoo. His instrumental music involves modified conventional instruments, found objects and instruments of the composer’s own design, with amplification, live electronics and spatialization. He has taught electronic music at UC San Diego, UC Irvine and CalArts; he currently teaches music technology at Northwestern University.
http://musictechnology.music.northwestern.edu/mercer/home.html

Rodolfo Vieira was a recipient of the 100 Young Creative Talents of the European Union in 2009, and the Búzio Revelation Prize from Portugal, and was a prizewinner at the Julio Cardona International Competition and RDP2 Prémio Jovens Músicos. He has performed as a soloist with the ERA orchestra (Chicago) and Lisbon’s Academic Metropolitan Orchestra, and has appeared in solo and chamber music recitals in Europe and the Americas. Rodolfo served as the concertmaster of the Conservatory Project Orchestra at the Kennedy Center (Washington) and as assistant concertmaster of the Civic Orchestra of Chicago under the direction of Pierre Boulez and Bernard Haitink. Vieira appeared at the Ravinia, Lucerne Academy and Oviedo festivals, and brought IRCAM’s technology to Northwestern University to perform Pierre Boulez’s Anthèmes II for solo violin and live electronics. Rodolfo teaches at the Music Institute of Chicago Academy and is a senior software developer at Northwestern University’s Advanced Media Production Studio.
