Toronto International Electroacoustic Symposium 2016

Detailed Symposium Schedule


Day 1 — Wednesday 10 August 2016

13:00–14:00 • Lecture-Recital #1

Venue: Canadian Music Centre

Kazimierz Serocki’s “Pianophonie” (1978) for piano, electronic sound transformation and orchestra: Reproducing old analogue sound transformation devices using the Max/MSP and Max for Live environment

Kazimierz Serocki (1922–1981) was a leading Polish avant-garde composer and a co-founder of Warsaw Autumn, one of the most important international contemporary music festivals worldwide. His last work, Pianophonie, was commissioned by SWR radio in Baden-Baden and performed in the Heinrich Strobel Experimental Music Studio in Freiburg. The work was written for piano, electronic sound transformation and orchestra.

Serocki treats the electronic layer as one of the orchestra’s instruments, using live electronics to enrich his colour palette, which must have been a point of special interest for a composer so deeply sensitive to sound colour. Electronic transformation is limited to the piano, whose sounds are captured by three microphones and directed to the sound transforming apparatus. The process of sound modification is controlled by the sound engineer, while the pianist operates two generators producing tones that correlate with the piano part according to the pianist’s own decisions. A characteristic feature of Pianophonie is the use of atypical techniques of sound production on the instrument. The extremely virtuosic piano part poses many technical challenges for the soloist.

We succeeded in bringing this fantastic composition back to life and reconstructed the complicated process of sound transformation and all of the devices and techniques used in the composition.

Working in the Max/MSP environment, Kamil Kęska was able to exactly reproduce the effect of all the analogue devices (including the Halaphone, which imitated the movement of sound in space by sending signals to individual speakers placed in the auditorium). Thus the whole team of technicians necessary to operate the vast analogue apparatus has been eliminated, all the functions have been transferred to Kamil’s computer — which has become the “brain” of the electronic part — and one complicated algorithm represents all the devices and their specified behaviour in every section of the composition. The pianist regulates the frequencies of the two tone generators from the apparatus on stage near the piano (a laptop running Max for Live). We adhered to Serocki’s original concept of a soloist who himself modifies the sound of the piano from the stage.
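
The spatial behaviour attributed to the Halaphone, moving a sound from speaker to speaker around the auditorium, can be approximated digitally with equal-power panning around a ring of loudspeakers. The Python sketch below is a minimal illustration of that general principle only; the function name and the four-speaker layout are assumptions for demonstration, not details of Kęska’s actual Max/MSP patch.

```python
import numpy as np

def ring_pan_gains(position, n_speakers=4):
    """Equal-power gains for a virtual source moving around a ring of
    speakers. `position` is in [0, n_speakers): 0.0 sits on speaker 0,
    1.5 sits halfway between speakers 1 and 2, and so on."""
    base = int(np.floor(position)) % n_speakers
    frac = position - np.floor(position)
    gains = np.zeros(n_speakers)
    # Equal-power crossfade between the two adjacent speakers.
    gains[base] = np.cos(frac * np.pi / 2)
    gains[(base + 1) % n_speakers] = np.sin(frac * np.pi / 2)
    return gains

# A sound "circles" the hall as `position` is swept over time:
for pos in np.linspace(0, 4, 9):   # one full revolution
    print(np.round(ring_pan_gains(pos), 2))
```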

In our lecture we want to share our work and explain the complicated process of recreating Serocki’s idea of sound transformation and the old analogue apparatus used in the composition. During the presentation we will show how the piano is transformed and how the system works. We will also show samples and videos from our performances in Poland (Warsaw National Philharmonic Hall, National Philharmonic Orchestra conducted by Jacek Kaspszyk at the 57th International Festival of Contemporary Music Warsaw Autumn in 2014). Finally, we will perform the cadenza from Pianophonie (around 8 min.), which represents the highest form of artistry in the composition.

Adam Kośmieja graduated from the Manhattan School of Music as a piano student of Solomon Mikowsky (2011), where he was awarded the Harold and Helen Schonberg Piano Scholarship (2007–11). Currently Adam is working on a PhD in piano performance under Katarzyna Popowa-Zydron and Jerzy Sulikowski at the Academy of Music in Bydgoszcz (Poland). He constantly promotes new music and explores the possibilities of combining it with electronic music. Adam Kośmieja has performed as a soloist in New York (Carnegie Hall’s Weill Recital Hall, Yamaha Concert Artist’s Hall), Chicago, Los Angeles, Japan, China, France, Spain, England, Sweden, Finland, Italy and the Czech Republic. His new music projects include 3xPiano (2009–11), which combines classical, jazz and electronic piano music, prepared in cooperation with American jazz pianist and 2014 Grammy nominee Christian Sands. Since 2014 Kośmieja has worked with Polish composer Stefan Weglowski on minimal music using piano and live electronics.
http://adamkosmieja.com

Kamil Kęska is a music producer and a faculty member of the composition, music theory and sound engineering department at the Academy of Music in Bydgoszcz (Poland). Kamil is a creator and performer of live electronic music and also a producer of classical, contemporary and alternative music recordings. He has collaborated with artists such as Karin Hellqvist, Wojciech Jachna, David Krakauer, Stefan Weglowski, Bartosz Koziak, Dobromila Jaskot, Michael Dobrzynski, Andrzej Bauer, Mikolaj Trzaska and Jenny Q Chai. Since 2006 Kamil has co-created and implemented projects and workshops designed for students, such as 100% Sound in Film (Polish Film Institute). In 2011–12 Kamil worked as a sound engineer at the Department of Pathophysiology of the Organ of Hearing and Balance System in the Department of Otolaryngology and Oncology at the Collegium Medicum in Bydgoszcz. Kęska is a recipient of multiple distinctions and scholarships awarded by the President of the City of Bydgoszcz and the Rector of the Academy of Music in Bydgoszcz.
http://www.amuz.bydgoszcz.pl/wykladowcy/wykl-kamil-keska

14:00–15:00 • Lecture-Recital #2

Venue: Canadian Music Centre

Contraction Point

Contraction Point is a meta-composition that integrates a human agent, a musical instrument, a performance space and a feedback delay network system. Two interconnected feedback processes take place in the “here and now”. The live sound of the instrument is recorded and played back by 12 spatialized variable delay lines. The sound of the delay lines is physically mixed in the acoustic space, recorded by the microphone and played back again in a continuous flow. The resulting sound textures are unexpected and unrepeatable and can be interpreted as emergent phenomena of this non-linear complex feedback process. The gestures of the performer are extended in time, space and frequency, which are naturally interconnected by the feedback delay network. The process can theoretically be interpreted as the scattering of sound inside a 10 km long multidimensional room whose faces move at various constant speeds, creating transpositions through Doppler effects.
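
The Doppler transpositions described above can be pictured with a single variable delay line: reading from a delay whose time changes at a constant rate transposes the signal, just as a room boundary moving at constant speed would. The Python sketch below is a simplified, offline illustration of this principle under stated assumptions (one delay line, a sine input, invented parameter values); it is not the piece’s actual 12-line real-time implementation.

```python
import numpy as np

SR = 48000  # sample rate, Hz

def moving_delay(signal, start_delay, speed):
    """Read from a delay line whose delay time changes at a constant
    rate (`speed`, in seconds per second). The output is transposed by
    the factor 1 - speed, the Doppler shift of a boundary moving at
    constant velocity."""
    n = len(signal)
    t = np.arange(n) / SR                   # output time, seconds
    read_t = t - (start_delay + speed * t)  # moving read position
    read_idx = np.clip(read_t * SR, 0, n - 1)
    return np.interp(read_idx, np.arange(n), signal)

tone = np.sin(2 * np.pi * 440 * np.arange(SR) / SR)  # 1 s of A440
# speed = -0.059: the delay shrinks over time, so the pitch rises by
# roughly a semitone (factor 1.059).
shifted = moving_delay(tone, start_delay=0.25, speed=-0.059)
```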

In a parallel process, the performer makes 12 listening walks in order to locate the speaker carrying the highest transposed delay line. After returning to the instrument, the performer plays the estimated note (notes and speakers are predefined in a fixed relationship: speaker 1 = C, speaker 2 = C#, speaker 3 = D, etc.). The system evaluates the input note and contracts the transposition range of the delay lines accordingly. The delay lines are then redistributed randomly in space, with the only constraint that the delay line with the highest transposition will appear in all 12 loudspeakers. Sound is the only interface that interconnects the human agent with the digital system. Essential interaction is achieved, since the performer listens to the output of the system and acts accordingly, while the system tracks the performer’s replies and changes the parameters of its internal states.
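
A toy version of this evaluation loop, sketched below in Python, may help clarify the game mechanics. The speaker-to-note table follows the fixed relationship given above; the contraction rule and the reshuffling code are loudly hypothetical, since the abstract does not specify how much the range contracts per answer.

```python
import random

NOTES = ["C", "C#", "D", "D#", "E", "F",
         "F#", "G", "G#", "A", "A#", "B"]   # speaker i <-> NOTES[i]

def play_round(target_speaker, answered_note, t_range):
    """One evaluation round: compare the performer's note against the
    speaker carrying the highest-transposed delay line, contract the
    transposition range, then reshuffle delay lines among speakers."""
    correct = NOTES.index(answered_note) == target_speaker
    # Hypothetical rule: halve the range on a hit, shrink it gently
    # on a miss (the actual contraction rule is not published).
    t_range *= 0.5 if correct else 0.9
    assignment = list(range(12))
    random.shuffle(assignment)              # delay line -> loudspeaker
    return t_range, assignment

t_range = 12.0                              # semitones
for _ in range(12):                         # the 12 listening walks
    target = random.randrange(12)
    guess = random.choice(NOTES)            # stand-in for the performer
    t_range, layout = play_round(target, guess, t_range)
print(f"final range after 12 rounds: {t_range:.2f} semitones")
```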

After the 12th evaluation the system freezes the range contraction and reduces the window time of the delay lines. The emergent effect is a loss of space perception that is gradually transformed into timbre perception. Theoretically, the 10 km long room contracts to the tiny space of the resonant body of a musical instrument. The achieved game score describes the final speeds of the faces of the multidimensional resonant body, which is heard as pitch-shifted resonance. In every performance a different game score will be achieved, leading to different resonant timbres. If the performer achieves a perfect score (which has so far not happened in any rehearsal or concert), the transposition of all delay lines will be zero and we will get the normal amplification resonance.

Kosmas Giannoutakis creates dynamic sound artworks by interconnecting human agents, sound bodies, acoustic sites and digital audiovisual systems through the medium of sound. Using feedback mechanisms in order to create complexity and to control non-linearity, he researches the catalysis and communication of emergent sound phenomena. Giannoutakis’ work has been presented at various festivals and workshops, such as inSonic2015 and next_generation at the ZKM (Karlsruhe), Sound Islands Festival / 2nd International Symposium on Sound and Interactivity (Singapore), Gaudeamus Muziekweek (Utrecht), REAL/UNREAL BEAST FEaST 2016 (Birmingham), klingt gut! Symposium on Sound (Hamburg), GENERATE! Festival für elektronische Künste (Tübingen), EUROMicroFest in E-Werk (Freiburg), XXIX Summer Sounds Festival (Finland), the Avaton Music Festival (Cyprus) and the 7th International Workshop for Young Composers (Mazsalaca, Latvia). The Institute of Electronic Music and Acoustics (IEM) of the University of Music and Performing Arts Graz is the current inspiring environment for his interdisciplinary art experiments.
http://www.kosmasgiannoutakis.eu

15:30–16:30 • Lecture-Recital #3

Venue: Canadian Music Centre

Integrating Electroacoustic Techniques into Theatrical Performance

Over the past 10 years or so, I have worked on a variety of projects that, for the most part, were theatrical in a large or small way. I brought my electroacoustic interest and training to these projects and have, over time, developed some useful techniques and tools. I will share these techniques and tools by showing examples of their use in two specific projects, as well as speak more generally about how and why they were used.

Carey Dodge is a multidisciplinary artist whose work involves sonic arts, interactivity, installations, sound design, projection design and performance. He specializes in developing novel live sound and projection systems for performance and installation work. These systems often include live performance, custom-made software, algorithmic composition, live processing, surround sound environments and interactivity. Carey has a keen interest in creating new and exciting immersive experiences. His collaborations, individual work and research have taken him across Canada, to France and the UK. Since 2011, Carey has been a Board Member of the Canadian Electroacoustic Community (CEC).
http://www.careydodge.ca

Day 2 — Thursday 11 August

09:30–10:30 • Paper Session #1

Venue: Canadian Music Centre

Franco Donatoni: Quartetto III

This paper provides an analysis of Quartetto III, composed by the Italian musician Franco Donatoni. It is Donatoni’s third quartet and, unlike the other quartets, it was produced under the guidance of Marino Zuccheri at the Studio di Fonologia in Milan, using only electronic instruments. We have studied the historical, musical and technological context in which this work was conceived by using different historical sources, such as texts for broadcasting, documents and letters from the archives of the Studio di Fonologia, as well as notes and documents preserved at the Paul Sacher Stiftung. We have analyzed Quartetto III from different points of view, using the stereophonic and quadraphonic versions of this work — recordings “E018” and “Q002” respectively. In particular, we have pointed out the relationship between ministructure and macroform, underlining the progressive aggregation process, from “elements” to “groups” and “columns.” This objective has been achieved by means of:

  1. A partial Genetic Analysis by using PWGL;
  2. A Listening Analysis, following different musicological approaches such as those of Chion and Delalande, Denis Smalley’s spectromorphology, Stéphane Roy’s functional analysis, the temporal semiotic units (TSU) proposed by the Laboratoire Musique et Informatique de Marseille (MIM), and Sloboda and McAdams’ perceptive and cognitive studies.

This approach can give us some information about the macrostructure: Quartetto III, which lasts about five minutes, is structured in panels, i.e. sections with different metronome markings but with an internal coherence of articulation and musical development. Much attention has been paid to the structural and poetical use of quadraphonic space: Quartetto III seems to pave the way for the later electroacoustic works because of its use of spatial figures and “structured” electronic gestures.

Massimo Avantaggiato is an Italian sound engineer and composer. Since his mid-teens he has concentrated on expanding his musical landscape using electronics, unusual recording techniques and computer-based technology, all of which help him to develop his idea of sound and composition. He took a degree in Electroacoustic Composition with full marks at the Giuseppe Verdi Conservatoire in Milan and a degree as a Sound Engineer (Regione Lombardia). Finalist in some composition and video competitions, he has written music for short films and installations and also music for TV adverts. He has also recorded several CDs for various Italian and foreign labels.

Allusion and Timbre: A Theory of implicit reference, emotion and familiarity bias in contemporary music

Advances in computer technology have rapidly expanded the set of audio tools available to the contemporary music creator and have allowed access to a wealth of dormant information stored in previously recorded pieces of music, through the sonic imprint of production elements such as sound processors, microphones and the recording environment. Recent research in music psychology suggests that certain sound properties, such as timbre and amplitude envelope, can elicit specific emotions in a listener, while other research in cognition and perception indicates that familiarity can cause an implicit bias towards a new piece of recorded music or sound.

Through a brief examination of selections from popular music, I draw connections between existing research in these areas and outline a theory that suggests emotional triggers and implicit preferential biases may be instilled in new audio recordings by alluding to timbral properties of existing recordings. On a functional level, the references require no explicit knowledge of the new piece of music, operate on a subconscious level and are directly correlated with the audience’s listening experience.

I discuss positive and negative implications, motives and consequences of timbral referencing, as well as the ability to store additional information, including emotional properties, within a sound without altering traditional pitch, rhythm or harmonic components. Finally, I conclude that timbral traits can theoretically be used like a Trojan Horse for emotional content and extramusical information, and that the knowledge of this information provides a powerful and expressive tool for the contemporary music creator.

Jon Martin is a Canadian composer of electroacoustic and contemporary music currently residing in southern Alberta, pursuing his Master of Music degree at the University of Lethbridge with a research focus in music psychology and composition. His work explores aspects of music psychology, including musical expectancy, timbral recognition, perception, semiotics and cultural bias, and includes a permanent 12-channel installation at the world-class Royal Tyrrell Museum in Drumheller, Alberta. Other interests include long-form composition, spatialization, fixed-media sound installations, live performance, travel and culture, and advanced digital signal processing. He works as a musician, freelance music producer and Graduate Teaching Assistant at the University of Lethbridge.
http://www.jonmartin.ca

10:45–12:15 • Paper Session #2

Venue: Canadian Music Centre

It’s Not an Instrument, It’s an Ensemble: Adventures in modern modular synthesizer design

Joseph Hyde will talk through a year-long project building a new modular synthesiser. These instruments, which last had prominence in the 60s and 70s, have had a spectacular resurgence in recent years, with many boutique manufacturers springing up to offer a bewildering array of modules and parts, allowing the intrepid synthesis adventurer to build a bespoke system unique to them. Hyde’s system deliberately goes against many orthodoxies in this area: rather than attempting to evolve a coherent instrument, he is deliberately aiming for pluralism and diversity, building something closer to an ensemble than a single instrument, using an 8-channel model so that the audience can be surrounded by eight distinct, localized voices. These voices are tuned precisely to intervals based on low-order natural ratios and controlled using a complex digital/analogue model — whilst the sound is pure analogue, the “engine” behind the scenes is entirely digital (and built from scratch), allowing a distinctive blend of old and new technologies. The talk will avoid unnecessary technicalities and will discuss the æsthetic decisions informing the project, the joys of natural tunings and the cultures emerging around modular synthesisers. It will also include plenty of illustrations and sound examples.
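
For readers unfamiliar with natural tunings, the sketch below computes exact frequencies for eight voices tuned to low-order just ratios above a shared fundamental, one voice per channel of a hypothetical 8-channel system. The specific ratios and fundamental are illustrative assumptions; Hyde’s actual tuning scheme is his own.

```python
from fractions import Fraction

# Low-order natural (just) ratios, one per channel of a hypothetical
# 8-voice system.
RATIOS = [Fraction(1, 1), Fraction(9, 8), Fraction(5, 4), Fraction(4, 3),
          Fraction(3, 2), Fraction(5, 3), Fraction(7, 4), Fraction(2, 1)]

def voice_frequencies(fundamental_hz=110.0):
    """Exact frequencies for eight spatially separated voices tuned to
    low-order ratios above a common fundamental."""
    return [float(fundamental_hz * r) for r in RATIOS]

for channel, freq in enumerate(voice_frequencies(), start=1):
    print(f"channel {channel}: {freq:7.2f} Hz")
```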

Joseph Hyde’s background is as a musician and composer, working in various areas but — following a period working with BEAST (Birmingham, UK) — settling in the late 90s on electroacoustic music, with or without live instruments. Whilst music and sound remain at the core of his practice, collaboration has since become a key concern, particularly in the field of dance. Here he works both as a composer and with video, interactive systems and telepresence. His solo work has broadened in scope to incorporate these elements. He has made several audiovisual works and has written about the field, recently completing a project on the unique musical notation used by animation pioneer Oskar Fischinger. Hyde also works as a lecturer / academic, as Professor of Music at Bath Spa University (UK). Since 2009 he has run a symposium on visual music at the university, Seeing Sound.
http://josephhyde.co.uk

Kinesia: The use of physical gesture as an expressive tool in electroacoustic composition

One of my main goals as a composer of electroacoustic music is to create intuitive means of interacting with the musical material I use in my compositions. This has led me to begin investigating gesture-based controllers that enable me to interact and engage with the sonic material in a natural and musical manner. Going beyond the QWERTY keyboard and trackpad opens up expressive potential that is sometimes neglected in computer music. Music is a multimodal experience and the lack of physical connection with musical materials can sometimes add a level of abstraction between the performer and the audience. I feel that it is extremely important in the performance of electroacoustic musical works to embody the sonic material through physical gesture, as it not only allows the performer to engage with the piece in a visceral manner but also helps the audience to immerse themselves in the music and to empathise with the performer. This paper examines the use of physical gesture as an expressive tool in electroacoustic composition. More specifically, it investigates how the physical movements of a dancer can be used to both trigger and manipulate sonic events within the performance of an electroacoustic composition. I will explain the methods that I used to map the movements of the dancer, how the source material was generated and briefly talk about the conceptual structure of an electroacoustic work that demonstrates these techniques in practice.

Shane Byrne is a composer of acoustic and electronic music and is currently a PhD researcher and Hume scholar at Maynooth University. His research is centred on interactivity and participation within electronic music composition. He has had a number of fixed-media pieces and installations showcased at events including the National Concert Hall in Dublin, the Hilltown music festival, SMC 2015, ISSTA 2014 and TIES 2015. His current work is focused on physical computing and the potential for human interaction, which has recently led him to investigate the potential for such interaction to facilitate and encourage learning among people with learning impairments and within the autistic community.
http://soundcloud.com/shane-broin

Live Sound Spatialization and On-Stage Feedback Control: Cues to develop an interaction between acoustic and electronic musicians

This paper presents the duet Jane/Kin, a project bringing together a saxophone player with a laptop/spatialist musician, focusing on chamber music relationships and improvisation. The duet investigates methods and cues to develop an interaction between acoustic and electronic musicians. The context of two interpreters with specific sound practices leads us to two main questions: how to articulate our interaction, and how to bridge our sonic realities in order to find coherence in our respective practices. Our departure point is to focus on live sound spatialization and on-stage feedback control. This allows the electronic musician to work directly with the acoustic sources, orchestrating them in the loudspeaker orchestra, whereas the acoustic musician uses on-stage microphones to create feedback, i.e. an electronic phenomenon. The paper presents our microphone setup, with specific amplification and placement generating different types of feedback, and the microphone assignations to the spatialization system of Montréal’s Usine C. These setups allow us to bridge acoustic and electronic sources and to find cohesion between the improvised and the fixed-media parts of the project. Finally, the paper records the observations of the experience from the perspective of the two musicians. The saxophonist explains how new musical gestures are generated from amplification, and the difficulty of having a clear perception of spatial trajectories. The spatialist describes three different spaces that emerge as new potentials to take into account: the space of the instrument, the space of the musician and the space of the loudspeaker orchestra. In light of this experience, our future project is to develop a more effective scenic layout in which both musicians could be positioned at a “sweet spot”, and to develop a monitoring tool for spatialization that will allow the saxophonist to treat spatial trajectories as a parameter of chamber music interaction.

Founded in Montréal, Jane/Kin is the meeting ground of saxophonist Ida Toninato and electroacoustician Ana Dall’Ara-Majek. The duo plays on the verge of instrumental music, electronics, drone, found sounds and more. Ida Toninato holds a doctorate in performance from the University of Montréal. Exploring the domain of sound, ideas and forms in artistic presentation, Ida loves the act of questioning, and her work has been inspired by experiences of loss of control and the way that sounds roll off the tongue. Ana Dall’Ara-Majek is a composer and sound artist influenced by musique concrète, her training as a harpist and her experiences as a Foley artist. Recently graduated with a doctoral degree in composition from the University of Montréal, she is investigating the interaction between instrumental, electroacoustic and computational thinking in composition.
http://www.idatoninato.com | http://soundcloud.com/anadallaramajek

14:15–15:15 • Lecture-Recital #4

Venue: Geary Lane

The Amplified Tuba: Three sides of a new medium

Over the last few years, there have been a number of works written for the amplified tuba, with amplification methods that range from the simple (placing a microphone in the bell) to the more complex (affixing transducers to the bell and using the tuba’s resonance as an amplifier). This process is aided by the tuba’s natural properties as a magnifier of sonic activity — at its base level, the tuba is simply a 14-foot long metal amplifier. Through the course of this lecture, I will present three works that demonstrate the range of possibilities with the instrument. To start, I will perform my own work breathing machine, which utilizes a microphone in the bell to amplify the sounds that occur “under the skin” of normal playing technique on the tuba. I will then present Monte Weber’s Colossus, a vocal and percussion work that happens to use the tuba as a performance vehicle. I will end with Kurt Isaacson’s abscess, which uses two transducers to vibrate the bell of the tuba itself throughout the course of the work. Between the works, I will demonstrate the preparations each one requires and discuss some further avenues for amplifying and modifying the tuba. Given the young age of the tuba as a solo voice, there is still much that can be done with the instrument, and it is my hope that this lecture-recital will spur on the creation of many new works for the amplified tuba.

Ohio-based tubist and composer Aaron Hynds has been performing across the Midwest since 2003, always with an emphasis on contemporary music. He is the recipient of music degrees from the University of Northern Iowa and the University of Wisconsin-Madison, the latter of which he attended as a Distinguished Graduate Fellow. He also works as a composer, with current commissions including the harp and electronics work stained with sleeping wounds and a setting of Ben Marcus’ novel The Age of Wire and Strings for Pierrot ensemble and soprano, entitled The Weather Killer. Aaron is currently attending Bowling Green State University, where he is pursuing a Doctor of Musical Arts degree in Contemporary Music.
http://aaronhynds.weebly.com

15:30–17:00 • Keynote Lecture: John Oswald

Venue: Geary Lane

Electronicousmatics, and my involvement since 1956

An idiosyncratic and illustrated survey of some aspects of discovering, magnifying, recording, freezing, displacing, tuning, fragmenting and contemplating aural realms.

John Oswald is best known as the creator of the music genre Plunderphonics, an appropriative form of recording studio creation that he began to develop in the late sixties. This has got him in trouble with — and also generated invitations from — major record labels and musical icons. Meanwhile, in the 1990s he began, with several commissions from the Kronos Quartet, to compose scores, in what he calls the Rascali Klepitoire, for classical musicians and orchestras, including b9 (2012–13), a half-hour condensation of all Beethoven’s Symphonies. He also improvises on the saxophone in various settings, dances and is a visual media artist and chronosopher, best known for the series Stillnessence. He’s a Canadian Governor General’s Media Artist Laureate. His multifaceted sonic clock, A Time To Hear For Here, is a permanent environment at the Royal Ontario Museum in Toronto. In 2016 he’ll be in residence in California and Umbria, presenting a concert in total darkness as part of 21C at Koerner Hall in Toronto, and, with Scott Thomson, filling Parc La Fontaine in Montréal with performance.
http://www.pfony.com | http://www.6q.com | http://www.plunderphonics.com | http://www.6q.com/obs

Day 3 — Friday 12 August

09:30–10:30 • Paper Session #3

Venue: Canadian Music Centre

Sounding Riddims: King Tubby’s Dub in the context of soundscape composition

A significant body of academic literature and music journalism has examined the historical trajectory of Jamaican dub music and its influence on house and hip-hop music. Ethnomusicologist Michael Veal proposes that, in purely sonic terms, dub is comparable to certain works of electronic composers (e.g., Karlheinz Stockhausen and John Cage) who subjected pre-recorded musical materials to electronic manipulation. However, no studies to date have situated dub in relation to soundscape composition. This paper seeks to position the compositional methods of Jamaican dub innovator King Tubby alongside those of Canadian soundscape composers Barry Truax and Hildegard Westerkamp. How are notions such as active listening and contextual meaning (whether in relation to musical or environmental soundscape perception) implied by Tubby’s creative studio employment of mixing “dropouts”, frequency EQ and tape delay echo? This paper does not attempt to identify æsthetic and stylistic similarities between Tubby’s dub music and soundscape composition. The author instead proposes that Truax, Tubby and Westerkamp’s compositional techniques directly address the following three conceptual themes: referential composition and the invocation of past listening associations through sonic abstraction, timbral play as a means of linking sound processing to acoustic communication, and the evocation of environmental motion cues by way of ecologically informed sonic manipulation. These conceptual parameters arguably link Tubby’s music to soundscape compositional approaches and differentiate it from the purely abstract and non-referential methods upheld by the acousmatic electroacoustic tradition.

The author, whose own musical practice explores the intersections between dub and soundscape composition, created a dub remix “version” of a previous soundscape composition, Unseen Songlines (2011), which he produced using field recordings collected in the Brazilian Amazon rainforest. For his paper presentation at TIES 2016, the author will play short excerpts from this dub piece and comment on how it directly addresses the conceptual themes highlighted in this paper.

Nimalan Yoganathan is a Montréal sound artist and musician whose work interweaves hip-hop, dub and soundscape compositional æsthetics. His rhythmic electronic textures complement sculpted field recordings from his travels through Arctic Quebec, the Brazilian Amazon rainforest and the bustling cities and temples of Southeast Asia. Nimalan has performed internationally at festivals and venues including Soundasaurus/Arts Commons (Calgary), Signal & Noise (Vancouver), Flausina (Lisbon), MUTEK (Montréal), Pop Montréal, Suoni per il Popolo, and OBORO. He holds a BFA in Electroacoustic Composition (Concordia University) and is completing his MA in Media Studies (Concordia University). His current academic research investigates the musical, social and political resonances of environmental sound, as well as the ethical implications of sampling and composing with field recordings of both human and non-human soundscapes.
http://nimalanyoganathan.wordpress.com

Copy-Paste Aesthetics, Distributed Creativity and the Circulation of Max for Live Devices in Online Communities

Every code of music is rooted in the ideologies and technologies of its age, and at the same time produces them.
—Jacques Attali

The 2009 integration of Max and Live into a single interface — Max for Live (M4L) — signalled a significant convergence for two historically distinct music software paradigms: patch programming languages and digital audio workstations (DAW). Having built its reputation as a “blank canvas” capable of containing near-infinite musical possibilities, Max’s role in this asymmetrical coupling is to augment the functionalities of Live, giving the user greater control in creating custom effects, instruments and hardware configurations. And yet, despite this do-it-yourself affordance, many users rely instead on M4L devices that circulate in online repositories, forgoing the arduous task of patching together code from scratch in favour of readymade solutions. What is the significance of this practice in contemporary electronic music culture?

This paper explores how the circulation of M4L devices provides a decentralized model for both sharing and selling encapsulated bits of music-theoretical knowledge within a virtual community. In many cases, M4L devices are marketed by individual artists or institutions, enabling end users to align themselves with a constellation of tools that project a particular æsthetic affiliation. This lateral exchange of devices engenders a copy-paste æsthetic that rubs against traditional notions of authorial agency and musical autonomy, suggesting an alternative view of creativity as a process that is distributed across a network of potential actors. As a result, there has been an enlarged scale and accelerated pace of musical cross-pollination within these device-centred communities. Additionally, the M4L model has reconfigured existing relationships between software users and makers, opening up a marketplace that is predicated on tapping users as a newfound mode of production. In this paper, I will explore the nature of black-boxed æsthetic propensities within M4L devices and examine the political economy driving the exchange of these devices within online communities.

Landon Morrison is currently pursuing a PhD in music theory and working as a course lecturer at the Schulich School of Music of McGill University in Montréal. Additionally, he is a member of the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT). His current research interests include analyzing contemporary musique mixte, theorizing approaches to musical time in the digital era, and studying the impact of software design on compositional process and musical product.
http://www.landonmorrison.com

10:45–12:15 • Paper Session #4

Venue: Canadian Music Centre

Microtonality, Technology and Dramatic Narrative in Manfred Stahnke’s Operas

The contemporary German composer Manfred Stahnke is one of the key protagonists in employing microtonality and technology to shape the dramatic narrative of theatrical music, having established microtonal structures and technological tools as substantial devices for creating musical drama. This paper explores the ways in which microtonality and technology have become a paramount means of facilitating the mediation of the drama’s essential philosophical, mythical and psychological connotations in the major operas of this Hamburg-based composer. As case studies, I will expound upon the role of microtonality and technology in Stahnke’s Der Untergang des Hauses Usher (1981), Wahnsinn, das ist die Seele der Handlung (1982), Heinrich IV (1986) and Orpheus Kristall (2002).

Since the beginning of the twentieth century, microtonal structures have become pertinent compositional vehicles in the hands of many composers attempting to transcend the limited scope of twelve-tone equal temperament. However, musicologists have not yet explored the ways in which the compositional tools of microtonality and technology function in constructing drama and delineating the fundamental implications of the musical narrative, in this case in Manfred Stahnke’s theatrical music. In this paper, I analyze Stahnke’s approach to engaging digital media, electronics and philosophy in these four operas, which enhances the function of his microtonal structures in constructing dramatic narrative. These operas are successful examples of multimedia artworks that draw upon ancient myth, philosophy and psychology in order to address issues related to cultural and personal identity, while shedding light on the subtle amalgamation of technology and microtonality.

Navid Bargrizan is a PhD candidate in musicology at the University of Florida, pursuing a cognate in composition. He has presented papers on intersections of music, technology and philosophy at several conferences in the USA, Canada, Germany, Austria and Turkey. His articles, reviews, interviews and papers have been published in Müzik-Bilim Dergisi: The Journal of Musicology, the SCI Newsletter and the proceedings of conferences in Berlin and Istanbul. As a composer, Navid experiments with microtones, tunings, intonations and electronics. His music has been performed at venues such as the New York City Electroacoustic Music Festival, Midwest Graduate Music Consortium, Collapss/Stacks Concerts (Greensboro, North Carolina), Florida Contemporary Music Festival, Unbalanced Connection Electroacoustic Concerts and the SCI Student Chapter Concert at UF. Navid has received a 2015 DAAD Award, a 2016 UF Doctoral Research Award, and UF’s 2015 Best of College of Arts Award for his saxophone duo, 10 Aphorisms.
http://www.navidbargrizan.com

Xenakis’ “Orient/Occident”: Some alternative analyses

This paper provides some alternative analyses of Orient/Occident that underline specific ideas of this composition and its originality, both within Xenakis’ output and in the history of electroacoustic music. We have investigated aspects of similarity among the constituent sound materials, illuminating the temporal relationships existing among them, exploring sound identity correspondences and variations, and providing a taxonomy of recurrent phenomena to help rationalize compositional structuring processes.

By reading the acoustic surface, we tried to derive some coherent images, to identify sound objects following the vocabulary of the composer or other approaches (Schaeffer, Bayle), and to find structural relationships and overall meanings (TSU). We have tried to point out the relationship between ministructure and macroform, underlining the progressive aggregation process using paradigmatic charts and generative trees. This main objective has also been achieved by comparing the composer’s sketches (Genetic Analyses) with the results of the listening analysis, following different musicological approaches: Schaeffer and Bayle’s approach, Denis Smalley’s spectromorphology, Stéphane Roy’s functional analysis, the temporal semiotic units (TSU) proposed by the Laboratoire Musique et Informatique de Marseille (MIM), Clarke’s paradigmatic approach, and Sloboda and McAdams’ perceptive and cognitive studies. For each analysis we made an “hors-partitur”, which provides a good visual support for listeners, and created layers of sonograms to compare the different analyses.

In Orient/Occident Xenakis showcases various techniques of layering and modelling sounds. We divided the work into sections characterized by a certain homogeneity of timbral and dynamic profile. Perceptive and cognitive studies and the resulting analyses, coordinated and summarized in different tables, gave us a full understanding of possible strategies for a live performance on the Acousmonium.

Massimo Avantaggiato is an Italian sound engineer and composer. Since his mid-teens he has concentrated on expanding his musical landscape using electronics, unusual recording techniques and computer-based technology, all of which help him to develop his idea of sound and composition. He took a degree in Electroacoustic Composition with full marks at the Giuseppe Verdi Conservatoire in Milan and a degree as a Sound Engineer (Regione Lombardia). Finalist in some composition and video competitions, he has written music for short films and installations and also music for TV adverts. He has also recorded several CDs for various Italian and foreign labels.

Before Their Time: Revisiting indeterminate composition in the digital age

Indeterminate musical compositions offer a provocative æsthetic model for using technology to mediate and intensify the relationship between the composer, the performers and the score in real-time performance. The indeterminate techniques of three composers — Earle Brown, John Zorn and Walter Thompson — will be treated as case studies for implementing a notation-based real-time computer-assisted composition (CAC) system. This process will entail representing performers’ realization of musical instructions as a constraint satisfaction problem and using a pseudorandom algorithm to make certain decisions in real time within different constraint sets. In addition, simultaneous constraints can be represented as vectors in an abstract musical space, presenting a new paradigm for performance via real-time navigation of the space. The practicability of these model implementations will be measured against the composers’ stated and implied æsthetic priorities. In addition, the author’s own system, Indra, will be evaluated using the same framework.
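
To make the constraint-satisfaction framing concrete, here is a minimal Python sketch that pseudorandomly realizes one decision within a set of constraints. The domain, the constraints and the function name are hypothetical illustrations; they are not drawn from Indra or from the three composers’ systems.

```python
import random

def realize(domain, constraints, seed=None):
    """Pseudorandomly choose one value that satisfies every constraint:
    a toy stand-in for realizing a musical instruction represented as
    a constraint satisfaction problem."""
    rng = random.Random(seed)
    candidates = [v for v in domain if all(c(v) for c in constraints)]
    return rng.choice(candidates) if candidates else None

pitches = range(36, 97)                      # MIDI note numbers
constraints = [
    lambda p: 60 <= p <= 84,                 # register limit
    lambda p: p % 12 in {0, 2, 4, 7, 9},     # pentatonic pitch classes
]
print(realize(pitches, constraints, seed=1))
```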

Drake Andersen is a composer whose work encompasses acoustic and electroacoustic music for diverse performing forces of all sizes and categories, collaborative projects for dance and theatre, site-specific installations and interactive electronic environments. Through the use of technology, including interactive software and new musical interfaces, his creative work engages literature, mathematics and the physical world. His compositions have been performed at venues throughout the United States and Europe, including Symphony Space, the Park Avenue Armory, New World Symphony Center, Teaterhuset Avant Garden (Trondheim) and the Irondale Center. Andersen is frequently engaged as a sound designer for theatre and dance, as an electronic music specialist for contemporary music ensembles and as an improviser with live electronics. Andersen’s principal composition teachers include Nils Vigeland, Joel Chadabe and Marjorie Merryman. He is currently a student in the PhD program in Music Composition at The Graduate Center, CUNY.
http://www.drakeandersen.com

14:15–15:15 • Special Session

Venue: Geary Lane

Acousmatic Music as a Medium for Information: A Case study of the acousmatic documentary “Archipel”

This talk addresses the acousmatic documentary genre, with a focus on the strategies employed to overcome — or benefit from — the dichotomy between figurative sound elements (specifically, voice and field recordings) and their abstract representations within the context of a sonic artwork with informative aims. This relationship will be discussed through the study of Archipel (Côté/Campion, 2016), a 29-minute acousmatic documentary that explores the interactions between the St. Lawrence River and the citizens of Montréal. Initially mainly concert-oriented, the project is currently undergoing a second phase of development, focusing on the elaboration of a mobile application and an interactive website, both based around and expanding further on the issues addressed in the concert work.

The presentation, focusing mainly on the concert work, will not only discuss the problems and particularities of the documentary approach in a sound-based and musical context, but also address the relation between the contemporary city (in this case Montréal) and access to nature (the St. Lawrence River). Through a blend of interviews with politicians, fishermen, urban designers, citizens and NPC workers, extensive field recordings gathered along the St. Lawrence’s riverbank, as well as more abstract, synthesized sound materials, Archipel aims to merge the poetic aspect of acousmatic music with facts, in order to expose one of the most critical aspects of modern urban life and ecology: the relation between major cities and their surrounding natural environments.

This session will be held in two parts: after a 20-minute talk-and-question period about the project and acousmatic documentary approach, the composers will perform a multi-channel diffusion of the entire 29-minute concert piece.

Archipel is a 29-minute electroacoustic piece based on a documentary approach. Driven by a desire to address concrete informative facts through the poetry of music, the composers’ creative process (documentation of the subject through research and interviews; location scouting and recording of multiple and varied sounds and soundscapes; editing, composing and performing the work in phase with documentary æsthetics) draws openly on the work of filmmakers in the tradition of “direct cinema” — such as Pierre Perrault or Robert Drew in the 1960s — whose influence is seldom heard in the electroacoustic genre. It is through this innovative approach that the duo tackles the complex relationship between the island of Montréal, upon which is built Québec’s largest city, and the St. Lawrence River, a majestic stream strongly symbolic of the province’s history and identity.

Duo Côté/Campion: Guillaume Côté and Guillaume Campion. Formed in the course of their graduate studies in electroacoustic composition at the University of Montréal, the duo presents their first joint work, the acousmatic documentary Archipel (2016). Stemming from the blend of the composers’ complementary æsthetics, the sonic territory explored by the duo lies at the crossroads of acousmatic music and radio documentary. Driven by a desire to address concrete, real-life issues through the poetry of music, the composers’ creative process draws openly on the works of film and radio documentarians such as Pierre Perrault, Chris Marker, Yann Paranthoen and Christophe Deleu. The duo thus aims to instil tangible social reach and relevance into their music.
http://soundcloud.com/guillaume-campion

Day 4 — Saturday 13 August

09:30–11:00 • Paper Session #5

Venue: Geary Lane

Live Coding the Mobile Voice

Embracing the Voice is a performance piece that focuses on diverse transformations of the voice during a live-coded performance. In particular, the piece seeks to bridge the gap between the performer and the audience by exploring spatialization techniques. What opportunities and challenges do mobile devices present for the spatialization of live-coded sound? What kinds of transformations can be applied to vocalized sounds during live coding performance? What are the possibilities for presenting the voice in a spatialized live coding performance?

Live coding has allowed for new creative potentials in the relationships between computer programming and the development of musical composition. In particular, live coding has enabled new artistic methods of communication between performer and audience, as well as collaborative networking between performers. The principal goal of this research project has been to explore the æsthetic and communicative possibilities of the electroacoustically transformed voice. In my paper presentation I will review the research that inspired the creation of Embracing the Voice, discuss the different design considerations for the transformations of the voice and, lastly, present a brief live-coded demonstration of the performance to the audience.

Tanya Goncalves is an artist-researcher whose focus is audio programming, live coding and electroacoustic composition. Her work explores a curiosity for sound and the complex relationships between computer programming and the development of musical composition. She is currently a graduate student at McMaster University (Hamilton), where she is completing an MA in Communication and New Media. She holds a BA in Communication Studies and Multimedia. Tanya works closely with McMaster University’s Cybernetic Orchestra, a laptop orchestra that specializes in live coding in order to produce intricate improvisations and compositions. She is also a co-founder and the VP of Relations and Communications at MacGDA (the McMaster Game Development Association), a student-run organization dedicated to creating video games and teaching others about video game design. Her most recent research project consisted of the creation of a web audio programming language called WASP.
http://tanyamgoncalves.com

Radiophonic Arts and the Problem of the Stage

By the middle of the 20th Century, composers of electronic and electroacoustic music had begun to reenvision the concert hall. Buttressed by theories of acousmatic sound articulated by Pierre Schaeffer and his associates at the Groupe de recherches musicales (GRM) in Paris, loudspeakers replaced the hidden orchestras of the 19th Century. The orientation of the listener, however, would evolve much more slowly. As an alternative to the tradition of electronic sound production, which continued to organize its audiences in relation to the stage, this talk looks towards radiophonic art as cultivating a different mode of “live” listening and in turn a different relationship between audience and work. The novel transmission method made possible by wireless radios also created a space, institutional and otherwise, for experimentation and rethinking (or circumventing) this tradition. Though the discourse of music, in particular, continues to be mired in the values of stage production (authenticity, virtuosity, spectacle), the ubiquitous experience of radio listening created new imaginaries and new sonic spaces that have yet to be fully realized. I argue that radiophonic work offers a paradigm for sonic arts in general to finally leave the stage behind. I consider a selection of work including the Hörspiele of Luc Ferrari, entries to the Prix Italia, and more recent compositions for radio outlets such as Kunstradio and Radio Papesse. I also analyze a variety of approaches to live performances, from the “anti-acoustics” of Éliane Radigue, to Francisco López’s blindfolds and Marcus Fischer’s multi-channel concerts.

Joseph Sannicandro is a writer and scholar currently based in Minneapolis, where he is pursuing a PhD in Cultural Studies. Outside of the academy, he regularly blogs and publishes art and music criticism. Past work has dealt with Italian social movements, Hannah Arendt and online education, radiophonic arts and a history of New York’s immigration through the lens of its public works projects. He also records under the moniker the new objective and under his own name. The cassette Les Rumeurs de la montagne rouge, en chœur, convergent (2014, Howl Arts) utilizes field recordings from the 2012 Québec student strike to explore the relationship between art and activism. An album of processed field recordings made in Mexico City was recently released on the Galaverna label.
http://noiseeconomy.wordpress.com

11:15–12:45 • Paper Session #6

Venue: Geary Lane

Proposing an Application for Binaural Beating in Timbre Modulation

While binaural beating is most commonly employed in spatial music and brain entrainment routines, the true potential of this psychoacoustic phenomenon within more traditional methods of electronic composition is often overlooked. A number of substantial foundational studies have provided an understanding of the scientific nature of binaural beating; however, significantly less is known regarding its application to timbre modulation in music.

Timbre modulation can be explored through a combination of varying localization cues alongside spectral components including bandwidth, formant regions, periodic frequency content and phase properties. Among the primary elements in the perception of harmonic and inharmonic material are bandwidth phenomena, which also play a key role in the perception and creation of binaural beating. Considering these factors, it would appear that a direct link can be made between timbre modulation and binaural beating.

This paper considers a number of linear models that are intended to present the composer with an outline of the potential results of binaural beating on the listener when considering timbre modulation. These results are obtained by investigating the relationship between multiple parameters of the stimuli and the perceptual processes between the inner ear and the brain. These models will consider the following properties of binaural beating: intracranial motion, amplitude modulation, phase sensitivity and critical bandwidth.
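
For reference, a binaural beat stimulus is simply a pair of sinusoids, one per ear, separated by the desired beat frequency; the beating arises in the listener’s binaural system rather than in either channel. The Python sketch below generates such a stereo pair. The carrier and beat frequencies are arbitrary example values, and the sketch does not reproduce the author’s linear models.

```python
import numpy as np

SR = 44100  # sample rate, Hz

def binaural_beat(carrier_hz=220.0, beat_hz=4.0, seconds=5.0):
    """Stereo stimulus whose left and right channels differ by
    `beat_hz`. Neither channel contains any amplitude modulation;
    the beat percept (intracranial motion, perceived modulation at
    the difference frequency) is created by the auditory system."""
    t = np.arange(int(SR * seconds)) / SR
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)   # shape: (samples, 2)

stimulus = binaural_beat()  # play over headphones, one channel per ear
```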

With references to the author’s own compositional material, this paper demonstrates the potential for employing such research in relation to the timbral modulation of both complex periodic and inharmonic sound sources.

Brian Connolly is a composer, sound artist and final year PhD student at Maynooth University with research interests in the application of psychoacoustic phenomena concerning the non-linearities of the inner ear within composition. Brian has composed the music for Keith Barry’s The Dark Side tour as well as having written and presented the RTÉ Lyric FM documentary Why Music Can’t Stay Still. In the past 18 months the composer’s research into the ear as an instrument has been accepted for inclusion in programmes with Music Current, SMC and ISTCC (Ireland), Sonorities and NI Science Festival (Northern Ireland), INTER- (Scotland), SSC, INTIME and BEAST FEaST (England), ASA and FEASt Fest (USA), MUSLAB (Mexico) as well as the 2015 Toronto International Electroacoustic Symposium (TIES).
http://www.soundcloud.com/brianconnolly-1

Sonifying Tidmarsh

Typical approaches to sonification include the one-to-one mapping of data sets to the pitch and amplitude structures of digital musical instruments for the purpose of auditory display. As an intended departure from the conventional, Sonifying Tidmarsh Living Observatory explores the interplay between an improvising musician and historical environmental data, with a focus on how data sets can be used to sculpt timbre, space and time in the creation of original electronic music. In this presentation, the author will demonstrate a sonification system designed in Max/MSP that allows a user to request and map environmental data from the past and present. The author will demonstrate how data sets from the Tidmarsh Living Observatory representing barometric pressure, humidity and illuminance may be mapped to control the timbral, temporal and spatial development of musical structures. The system demonstration will also include discussion of emerging performance strategies and software abstractions that allow the user to loop and control the tempo and playback direction of groups of data points. The author will also discuss future work, including the application of data from all of the available sensor nodes in the Tidmarsh Living Observatory to a High-Density Loudspeaker Array (HDLA) in order to create masses of shifting timbral and spatial patterns within a larger performance space.
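
The core mapping step of such a system can be pictured as range scaling from sensor units to synthesis parameters. The Python sketch below shows one plausible version; the sensor names, ranges and parameter destinations are invented for illustration and are not the author’s actual Max/MSP mappings.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map (and clamp) a sensor reading onto a synthesis
    parameter range."""
    value = min(max(value, in_lo), in_hi)
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# Hypothetical mappings in the spirit of the system described above.
reading = {"pressure_hPa": 1012.0, "humidity_pct": 64.0, "lux": 18000.0}
params = {
    "filter_cutoff_hz": scale(reading["pressure_hPa"], 980, 1040, 200, 8000),
    "grain_density":    scale(reading["humidity_pct"], 0, 100, 1, 50),
    "spatial_spread":   scale(reading["lux"], 0, 60000, 0.0, 1.0),
}
print(params)
```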

Richard Graham is a guitarist and computer musician based in the United States. He has performed across the US, Asia, the UK and Europe, including festivals and conferences such as Celtronic and the International Symposium on Electronic Art. He has composed music for British and US television, recorded live sessions for BBC radio, and his music has been authored for the popular video game Rock Band. Graham’s academic research is centred on computer-assisted music composition and instrumental performance. He developed the first iteration of his live performance system for multi-channel guitar as an artist-in-residence at STEIM in 2010. In 2012 he received his PhD in Music Technology from Ulster University and he is now an Assistant Professor of Music and Technology at Stevens Institute of Technology in Hoboken, New Jersey. His most recent papers on performance systems design and electroacoustic theory were presented at NIME, EMS and ISSTA in 2015, and pieces from his most recent album, Nascent, were featured at SEAMUS, NYCEMF and ICMC in 2015, with his most recent works featured at SEAMUS 2016.
http://rickygraham.com

Convergence of Set: A Very general technique for automated musical decision-making

This paper presents Convergent Array, an algorithm for converging and diverging sets of numeric data. Within the context of parametric musical systems, the algorithm provides a way to modulate musical (ir)regularity — to deterministically “change the die” used to indeterminately select sound parameters. Particular attention is paid to using the algorithm to control large-scale form, directionality and emergent counterpoint in the author’s recent acousmatic piece, What Rough Beast Slouches? In the context of this work, convergence becomes an overarching paradigm for electroacoustic voice-leading. The hierarchical implementation of the algorithm emphasizes the priority of “leading” over the identification of “voice”. This priority is a basis for discussing alternative contexts in which Convergent Array may be applied — to change the appearance of what appears, regardless of material. Additionally, the paper calls into question the tension between more intelligent models for generative music and simple generic tools such as Convergent Array; the extensibility of convergence of set and its hierarchical implementation undermines the assumed benefit of more intelligent models. Convergent Array illustrates how a simple, mechanistic and anti-humanist tool for automated decision making may nevertheless yield music that draws listeners into a web of hermeneutic, humanist prerogatives.
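
The basic gesture of converging a numeric set can be sketched in a few lines. The Python toy below linearly pulls a parameter set toward a target while selections from the set remain pseudorandom; it illustrates the general idea only, under assumed values, and is not Convergent Array’s actual hierarchical implementation.

```python
import random

def converge(values, target, amount):
    """Pull every element of a parameter set toward a common target.
    amount=0 leaves the set untouched, amount=1 collapses it entirely;
    negative amounts diverge the set away from the target."""
    return [v + amount * (target - v) for v in values]

# A "die" of possible pitches that is deterministically reshaped while
# selections from it remain indeterminate.
pitches = [48.0, 55.0, 60.0, 67.0, 72.0]
for step in range(4):
    choice = random.choice(pitches)           # indeterminate selection
    pitches = converge(pitches, 60.0, 0.3)    # deterministic convergence
    print(f"step {step}: chose {choice:.1f}, set -> "
          + ", ".join(f"{p:.1f}" for p in pitches))
```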

Sean Peuquet is an independent composer, digital artist, scholar, programmer and music hardware developer based in Denver. He presents his work regularly at national and international venues like SEAMUS, ICMC, NYCEMF, SCI, TIES and EMM. From 2012–14 he held the position of Visiting Professor of Digital Arts at Stetson University while completing his PhD in Music Composition at the University of Florida. He received his MA from Dartmouth College and holds a BA from the University of Virginia. His current research interests include generative music, self-reflexive listening practices and new paths for art as a socio-cultural determinant. In addition to his creative and scholarly work, he has co-founded two Denver-area startups: RackFX, which provides a web-based interface for analogue signal processing, and CauseART, a platform for cultivating relationships between artists and businesses.
http://ludicsound.com

14:15–15:15 • Lecture-Recital #5

Venue: Geary Lane

Sharing the Studio in the Creation of “Lépidoptères”: A Study in collaboration and notation

My co-composition of Lépidoptères for recorders and electronics with Monty Adkins offers a window onto collaborative practice and notation using electroacoustic methods and tools. In this lecture-recital, I present and perform two parts of Lépidoptères in which we used different fixed and real-time strategies. These provide specific examples showing how the sharing of sonic material and digital audio tools and environments — using the studio as a shared instrument — challenges the traditional paradigm of instrumentalist plus composer. I will also present the video scores that I created for the performance of these works, which, while highly personal and specific to both the works and instruments, offer ideas and avenues for notation that make use of an understanding of software and sound editing that is increasingly common among electroacoustic performers. In showing how these scores function and then demonstrating their use in performance, I hope to suggest possibilities for a broader and more systematized implementation of such mnemonic aids and to gain the perspectives and feedback of our community on how to develop them further within less personal projects.

Lépidoptères is a cycle of five works for recorder and electronics co-composed by Terri Hron and Monty Adkins in 2014–15. The project was conceived during Hron’s residency in the summer of 2014 at the studios of the Centre for Research in New Music at the University of Huddersfield. The recorders used belong together as a consort: they have a broad sound with strong upper partials, and there are intimate, almost inexplicable sonic and physical connections between the different instruments. This inspired similar connections and interactions between the recorder(s) and electronics.

Recorder player and composer Terri Hron mostly makes music in collaboration with others. Currently she directs NESTING, a performance integrating musical, visual and embodied practices, and Portrait Collection, an installation-performance project in which she translates the idea of portraiture to sound. Her collaborators include Monty Adkins, Luciane Cardassi, Katelyn Clark, Camille Hesketh, Paula Matthusen, Cléo Palacio-Quintin and Hildegard Westerkamp. She studies ways to encourage and notate collaborative practices in the creation of electroacoustic music for specific performers, as well as how multimedia influences musicians in performance.
http://www.birdonawire.ca
