
M2 Diffusion

The live diffusion of sound in space

This paper outlines some of the rapid changes taking place in electroacoustic music performance and introduces the M2 diffusion system, currently in use at The University of Sheffield Sound Studios (USSS). The paper focuses upon the process commonly known as “sound diffusion”: the performance of music usually played from CD or computer. The M2 system comprises bespoke software and hardware tools offering greater flexibility and scope for improvisation in performance, as well as new approaches to the musical composition of space. The paper speculates upon the future of the M2 system with its SuperDiffuse software and the new “composition opportunities” it has triggered.

Introduction

Sound Diffusion as Performance

The performance of electroacoustic music has for many years rested upon the diffusion of materials “fixed” to tape in the composition process. “Musique fixée sur support” was a term coined by Michel Chion in the early 1990s and is used here to differentiate between music created prior to performance and compositional choices made “live”. Electroacoustic performance has in the past asked that the diffuser make real the spatial motion and structural relationships implied on the (for the most part) stereo tape. Changes in technology have allowed once expensive multichannel media to appear in the concert hall and in the home, offering fixed N-channel (or N.1) automation. Whilst many composers have utilised these multichannel avenues to enhance their “sound diffusion”, this paper reconsiders the practice of stereo sound diffusion, its validity as a performance art and its relationship to composition, especially at the moment when a composer commences the “fixing” process by mixing in the studio.

Initial research has led to the creation of the M2 system with SuperDiffuse software, which has become the primary tool for further practice-led performance research at The University of Sheffield Sound Studios (USSS). Through this research we hope to raise awareness of the need to integrate multiple performance interfaces for sound diffusion, catering for all users (from able-bodied musicians to those with minor or severe disabilities).

Previous Work in the Field

Lying at the heart of this research is a compositional necessity, the need to approach electroacoustic music on fixed media from another angle. Continued excellence in performance practice can be seen in the activity of the Groupe de Recherches Musicales in Paris, the Groupe de Musique Experimentale de Bourges and Birmingham ElectroAcoustic Sound Theatre (BEAST), to name but a few. Theoretical work is, by contrast, somewhat underdeveloped; articles by MacDonald (1995), Harrison (1999), Clozier (1997) and Wyatt (1999) have begun to define practice-led research in electroacoustic performance.

Considerable amounts of money have been spent by the three institutions mentioned to create unique performance tools that satisfy two main aims: to use electronic devices to manipulate sound in space, and to enable performers to interface with such devices.

All rely in some form or other upon the performer (often the composer) manipulating faders, usually at a console. BEAST has recently demonstrated numerous methods of “hardwiring” other tools and alternative “mappings” to manipulate multichannel music over up to 80 loudspeakers. Certainly, we may impose our own trajectories on sound, but we must also be aware of what the sound itself tells us about its position and flight. Harrison naturally supposes that diffusion is thoroughly entwined in the composition process, but categorically states that diffusion is not “the random throwing-around of sound which destroys the composer’s intentions” (Harrison 1999).

Recent performance history suggests a growing tendency to hold firm to the “acousmatic” spirit embodied in much electroacoustic music, in which the sources of sound are hidden from the audience’s sight. Despite the introduction of a sound diffusion competition in Belgium, the status of the sound diffuser remains that of engineer rather than artist.

Sound diffusion systems

The Acousmonium

In 1974, François Bayle created the Acousmonium at the Groupe de Recherches Musicales (GRM). The format adopted was an “orchestra of loudspeakers”, with all but a few speakers placed on the stage; a formidable sight indeed, presenting ghostly monolithic pillars of sound. The arrangement on the stage was based upon each speaker’s performance characteristics. Whilst the majority of the system was symmetrical, some “groups” penetrated this symmetry as “soloists”. It is clear that Bayle was influenced by the sonic characteristics of the loudspeakers on offer at the time. It is also interesting to note that whilst the studios at the GRM are state-of-the-art, the Acousmonium has not incorporated more acoustically transparent loudspeakers.

The Cybernaphone

The Cybernaphone of the Groupe de Musique Experimentale de Bourges developed by Christian Clozier represents yet another approach to sound diffusion; it combines a symmetrical deployment of loudspeakers (in very specific groups) with computer aided diffusion. It is only suited to the diffusion of stereo works from CD or DAT and is very difficult to adapt to other formats.

BEAST — Birmingham ElectroAcoustic Sound Theatre

The BEAST system also uses many different loudspeakers, often in a very symmetrical setup. Part of the process of installing a portable system (like BEAST) is understanding and correcting the space where a performance is taking place. The trained performer would know the frequency response of his loudspeakers and would use this to his advantage when positioning them in the concert space. In many respects a crude filtering of the original work has always taken place. A far cleaner solution would be to include filters and delays on the output channels of a computerised spatialisation system, though this may again require more acoustically transparent loudspeakers.
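As a minimal sketch of what such per-output correction might involve, the following places an integer sample delay (for time-aligning a distant loudspeaker) and a one-pole filter (as a crude high-frequency trim) on a single output channel. The class and parameter names are hypothetical, and the choice of a single one-pole filter is an illustrative assumption rather than a description of any existing system.

```cpp
// Hypothetical per-output correction stage: an integer sample delay for
// time alignment plus a one-pole low-pass as a crude spectral trim.
#include <cstddef>
#include <vector>

struct OutputCorrection {
    std::vector<float> delayLine;  // circular buffer, length = delay + 1
    std::size_t writePos = 0;
    float lpCoeff;                 // 0..1; 1.0 means effectively bypassed
    float lpState = 0.0f;

    OutputCorrection(std::size_t delaySamples, float coeff)
        : delayLine(delaySamples + 1, 0.0f), lpCoeff(coeff) {}

    float process(float x) {
        delayLine[writePos] = x;                       // write newest sample
        writePos = (writePos + 1) % delayLine.size();
        float delayed = delayLine[writePos];           // read oldest sample
        lpState += lpCoeff * (delayed - lpState);      // one-pole smoothing
        return lpState;
    }
};
```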

Transparency in Performance

Many have insisted upon a more transparent solution by transferring the composition studio to the concert hall. This often implies purchasing monitor loudspeakers for the concert space, sacrificing quantity for quality. Clearly this solution is neither visually striking nor does it lend itself to adaptation. The goal has much to commend it, but the majority of stereo works conceived in the studio, when taken to a concert hall equipped with only a few loudspeakers, fail to entertain those seated at awkward angles or at the rear of the hall. Additional loudspeakers satisfy this need, and immediately the composer must consider performance.

Denis Smalley has neatly defined many of the spatial characteristics that concern composers working with electroacoustic materials. He describes five categories that define space: spectral space, time as space, resonance, spatial articulation in composition and the transference of composed spatial articulation into the listening environment (Smalley 1986, 90). Expanding upon this fifth category Smalley writes:

… it is a question of adapting gesture and texture so that multi-level focus is possible for as many listeners as possible… In a medium which relies on the observation and discrimination of qualitative differences, where spectral criteria are so much the product of sound quality, the final act becomes the most crucial of all. (Smalley 1986, 92)

Aesthetic Considerations

A typical stereo performance will focus upon the manual control of sound, often with one fader of the diffusion console controlling one loudspeaker. The speed at which decisions can be implemented and the dexterity of control required when operating equipment clearly influence performance practice. Given the practicalities of very little rehearsal time in often inappropriate venues, this basic setup can be either extremely limiting or (in the case of a large system) highly intimidating.

During rehearsal, approximate trajectories may be mapped to a rough score. Whether the diffuser sticks to these in the heat of performance is another matter. The basis upon which a diffuser will articulate sound within different sections of a work is naturally suggested by the work itself. Performance currently tends to be part preparation and part improvisation through tools quickly learnt. It is the perceived lack of form within diffusion that needs further investigation through theory and practical work. It seems clear that the performer should have control over the spatialisation of sound and that he should be able to dictate the pace and style of the work itself during the performance process.

Copeland and Rolfe (2000) and others using the Richmond Sound Design Audiobox with ABControl software have attempted to marry composition and diffusion in real time. Without doubt the arrival of real-time laptop performance has had a seriously detrimental effect upon sound diffusion. The performer (again, often the designer of his or her own MSP patch) is usually front-and-centre, facing the audience. Their improvisation can have little spatial articulation (if indeed spatialisation is part of the work) because they cannot hear it, and the person situated at the desk cannot anticipate it. Therefore, if sound diffusion is to work in tandem with the construction of the piece in real time, the performer should be at the focal point of the sound.

Laptop music has, however, suggested a way in which a performer can dictate pace and style. Through dynamic control of manageable musical units (MMUs) a sense of the “here and now” is achieved, despite the protective veil of the laptop screen and the highly uninspiring interface that is the computer keyboard and mouse. Many real-time improvisers use pre-composed textural passages either as background texture or as an input to subtractive computer processes. Due to the very nature of the work (improvisation), these passages are often generic in nature and can be used in multiple situations.

The feedback loop that is the concrete link between loudspeaker, composer and computer is at the heart of electroacoustic music composed in the studio, and a real-time substitute is impossible. However, a compromise can be found, and multichannel input-output spatialisation may have the answer. Just as composers such as Paul Koonce have produced acoustically modified versions of complete works on multichannel tape, so it will be possible to have multiple copies of phrases, each slightly different depending upon the performance, each requiring slightly different diffusion. The proportion of “performance time” required to put the piece together compared to “diffusion time” will present an interesting dilemma for the composer and will require both systems to be highly flexible.

Research at USSS has begun to focus upon this latter aspect of “diffusion time”, bearing in mind the fact that the majority of electroacoustic music remains on stereo Compact Disc. The M2 diffusion system investigates how to relieve the burden of performance using large “one fader per loudspeaker” systems. It does so quite simply by modelling the lighting console. Many have intimated that all of this is possible within MSP, yet few have made anything that is stable, portable and easy to use.

The M2 system

The M2 system consists of two elements: the SuperDiffuse client/server software, communicating via TCP/IP between PCs, which accepts and manipulates MIDI controller data and outputs sound through ASIO soundcards; and a hardware mixer constructed at USSS for the performance of electroacoustic music, based around the IRCAM Atomic box. The hardware may seem highly unoriginal: it consists of a series of modules, each containing four Alps faders on a PCB, running to a mixing block which in turn outputs control voltages to the Atomic box. At first sight this is simply 32 MIDI faders, but it is interesting to note that performers have found the “real faders” invaluable within minutes of using the console.
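For illustration, a sketch of the kind of mapping the console’s MIDI data implies: a 7-bit controller value scaled to a linear gain. The squared taper is an assumption standing in for a proper fader law; the actual M2 taper is not documented here.

```cpp
// Hypothetical fader-law mapping from a 7-bit MIDI controller value
// (0..127) to a linear gain. The squared taper is an assumption.
#include <cmath>

float faderToGain(int cc) {
    float norm = cc / 127.0f;   // normalise to 0.0 .. 1.0
    return norm * norm;         // crude approximation of an audio taper
}

float faderToDb(int cc) {
    float g = faderToGain(cc);
    return (g > 0.0f) ? 20.0f * std::log10(g) : -120.0f;  // floor at -120 dB
}
```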

Figure 1. The console.

The mixer was designed with 32 faders in two rows of 16. This interfaces with the two 16-input blocks on the Atomic box. One of these inputs may be substituted for other interfaces, as may each group of four faders within the system. Given the upsurge of “new interfaces” offering performing opportunities, we remained with the in-line fader paradigm for the following reasons: space; cost (both of construction and repair); and the previous experience of key users. The range of gestures available from non-standard controllers affords greater freedom in performance and perhaps a more interesting visual stimulus, but runs contrary to our need to take up as little space as possible in the centre of the auditorium. Using the Atomic to convert CV to MIDI is a costly solution; a powered USB console is currently being designed.

Diversification towards “non-standard” controllers such as tilt switches, IR sensors, flex strips and outputs from video cameras will enable adaptation of the system, not necessarily to provide more flexibility in performance or more “natural” mappings, but to empower other performers with needs slightly different from our own. (Most able-bodied people involved with electronic audio will need to find something fairly convincing to overcome many years of experience with faders as virtual manipulators.) The Atomic box remains useful in this situation.

As the inputs to the software are MIDI, pre-composed spatialisation data enables more detailed control and is triggered alongside appropriate MMUs. Thus M2 and MSP are currently working in tandem at USSS in some trial compositions. The SuperDiffuse client software accepts the MIDI input and calculates any necessary matrixing and effects on a laptop PC. The results are sent via TCP/IP to an offstage server containing an ASIO card. Initial mapping of inputs to outputs is achieved in the matrix window (Fig. 2).
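A sketch of the kind of message the client might stream to the server is given below. The wire format (a sequence-numbered crosspoint update) is entirely hypothetical; the paper does not document the actual SuperDiffuse protocol.

```cpp
// Hypothetical control message streamed from client to server over TCP:
// each update names one matrix crosspoint and its new gain. A real
// deployment would also need to fix byte order across machines.
#include <cstdint>
#include <vector>

struct MatrixUpdate {
    std::uint32_t frame;     // monotonically increasing sequence number
    std::uint16_t input;     // matrix row being changed
    std::uint16_t output;    // matrix column being changed
    float gain;              // new linear gain for that crosspoint
};

// Flatten a batch of updates into bytes ready for a socket send() call.
std::vector<std::uint8_t> pack(const std::vector<MatrixUpdate>& updates) {
    std::vector<std::uint8_t> buf;
    for (const auto& u : updates) {
        const auto* p = reinterpret_cast<const std::uint8_t*>(&u);
        buf.insert(buf.end(), p, p + sizeof u);
    }
    return buf;
}
```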

Figure 2. The Matrix window.

Once inside the matrix, virtual attenuators allow sound to be routed (globally or locally) to any output. Given the modularity of the design, further DSP (for example, output EQ) can be placed at any point within this matrix.
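The routing stage itself reduces to a gain matrix: every input feeds every output through an independent attenuator. The sketch below shows this buffer by buffer; the class and method names are illustrative, not SuperDiffuse internals.

```cpp
// Illustrative gain matrix: out[j] = sum over i of gain[i][j] * in[i],
// computed one audio buffer at a time.
#include <cstddef>
#include <vector>

class DiffusionMatrix {
public:
    DiffusionMatrix(int inputs, int outputs)
        : nIn(inputs), nOut(outputs),
          gain(inputs, std::vector<float>(outputs, 0.0f)) {}

    void setGain(int in, int out, float g) { gain[in][out] = g; }

    // `in` and `out` hold one equal-length buffer per channel.
    void process(const std::vector<std::vector<float>>& in,
                 std::vector<std::vector<float>>& out) const {
        for (int j = 0; j < nOut; ++j)
            for (std::size_t n = 0; n < out[j].size(); ++n) {
                float acc = 0.0f;
                for (int i = 0; i < nIn; ++i)
                    acc += gain[i][j] * in[i][n];
                out[j][n] = acc;
            }
    }

private:
    int nIn, nOut;
    std::vector<std::vector<float>> gain;  // gain[input][output]
};
```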

The main window (Fig. 3) mirrors the hardware in front of the user, with options for assignment; the meters display the output of each fader, whatever its assignment. Clearly, the performer must keep track of each fader’s role throughout the performance (especially where the role may change); this aspect of “difficulty” has already been noted and other indicators are under investigation.

Figure 3. The main window mirroring the hardware.

The standard “one fader per loudspeaker” setup can be modelled very quickly. However, it also becomes quite easy to construct a group to be controlled proportionately by one fader. The level of control afforded in this system allows for accurate manipulation of sound and “compensatory” mapping. If one requires a large physical gesture to produce a relatively small effect (such as randomised panning over a number of outputs), a group can achieve this through proportional control over numerous effects. The software supports several basic but highly flexible effects, such as chase, random and LFO-type additions to any parameter (groups, other effects, single channels, etc.).
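A sketch of the two ideas just described, under hypothetical names: a group fader scaling its members proportionally, and an LFO-type addition layered onto any gain parameter.

```cpp
// Hypothetical proportional group control and an LFO-type addition to a
// gain parameter, clamped to a sensible range.
#include <cmath>

struct Channel {
    float baseLevel = 0.0f;  // level set for this member within the group
};

// One group fader (0..1) scales every member's level proportionally.
float memberLevel(const Channel& ch, float groupFader) {
    return ch.baseLevel * groupFader;
}

// Layer a sinusoidal modulation on top of any base value.
float lfoModulate(float base, float depth, float phase) {
    float v = base + depth * std::sin(phase);
    if (v < 0.0f) v = 0.0f;   // keep the result a valid gain
    if (v > 1.0f) v = 1.0f;
    return v;
}
```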

Future developments to the SuperDiffuse software include:

Real-time feedback of the remote matrix mixer’s status, including actual level metering with peak/clipping indication and the actual status of the DSP system;

Optimisation of the DSP algorithm to exclude I/O channels not being used on large multichannel systems;

Loudspeaker naming and a user-built diagram describing the current concert speaker layout;

I/O channel naming, with the ability to “alias” a channel with a loudspeaker name, enabling the user to select an input or output by name, or an output by the name of its connected loudspeaker (a minimal sketch follows);

Matrix DSP algorithms for delay and EQ;

Real-time streaming audio;

Increased security: critical settings lock-down, automatic backup and password protection.
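The channel-aliasing item above might be served by a simple name table, as in the following sketch; the structure and names are hypothetical, since this feature is still a proposal.

```cpp
// Hypothetical name table for the proposed channel aliasing: resolve an
// output either by channel name or by connected loudspeaker name.
#include <map>
#include <optional>
#include <string>

struct ChannelNames {
    std::map<std::string, int> byChannel;  // e.g. "Out 3"  -> channel 3
    std::map<std::string, int> bySpeaker;  // e.g. "Main L" -> channel 3

    std::optional<int> resolve(const std::string& name) const {
        if (auto it = byChannel.find(name); it != byChannel.end())
            return it->second;
        if (auto it = bySpeaker.find(name); it != bySpeaker.end())
            return it->second;
        return std::nullopt;  // unknown name
    }
};
```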

Figure 4. M2 in rehearsal. The console rests upon a flight case containing a MOTU 24i/o hardwired to an XLR patchbay, the Atomic box, a MIDI interface and a rack-mount PC running both client and server.

Conclusion

It is clear that through fracturing the composition and diffusion processes we can generate positive musical results as well as offering the possibility of improvisation with materials. We may need to make the problem of diffusion more complex in order to find new solutions, even to current issues. Our aim continues to be, without reinventing the wheel, to raise the importance of electroacoustic performance by creating innovative and flexible methods of performing a large canon of electroacoustic music. We are well on the way to producing reliable and powerful tools to achieve flexible working methods; it may be difficult to determine how successful we might be in raising the importance of electroacoustic performance.

As we consider the complexities of electroacoustic performance and balance an aesthetic that demands both sonic complexity and accurate structure with the need to improvise and perform, I suggest there is a strong case for continuing to compose sounds in an environment outside of real-time performance. I also believe there is room to fracture the electroacoustic work without destroying this paradigm. The M2 system currently affords tangible access to diffusion methodologies and, as a consequence, enables us to explore new composition and performance paradigms.

Bibliography

Chadabe, Joel. (1997). Electric Sound: The Past and Promise of Electronic Music. NJ: Prentice Hall.

Clozier, Christian. (1997). “Composition-diffusion / interprétation en musique électroacoustique.” Composition / Diffusion in Electroacoustic Music. Edited by Françoise Barrière and Gerald Bennett. Academie Bourges: Editions MNEMOSYNE, pp. 52–101.

Copeland, Darren and Chris Rolfe. (2000). “Sound Travels FAQ.” Archived at http://archive.groovy.net/soundtravels/stfaq.html

Harrison, Jonty. (1999). “Diffusion: Theories and practices, with particular reference to the BEAST system.” eContact! 2.4 — Diffusion multi-canal (1) / Multichannel diffusion (1) ([December] 1999). https://econtact.ca/Diffusion/Beast.htm

MacDonald, Alistair. (1995). “Performance Practice in the Presentation of Electroacoustic Music.” Computer Music Journal 19/4 (Winter 1995) “Sound Spatialization and Spatial Perception,” pp. 88–92.

Rolfe, Chris. (1999). “A Practical Guide to Diffusion.” eContact! 2.3 — Diffusion multi-canal (1) / Multichannel diffusion (1) ([December] 1999). https://econtact.ca/Diffusion/pracdiff.htm

Smalley, Denis. (1986). “Spectro-Morphology and Structuring Processes.” The Language of Electroacoustic Music. Edited by Simon Emmerson. London: MacMillan Press.

_____. (1996). “The Listening Imagination: Listening in the electroacoustic era.” Contemporary Music Review 13/2 (1996) “Computer Music in Context,” pp. 77–107.

Wyatt, Scott. (1999). “Investigative Studies on Sound Diffusion/Projection.” eContact! 2.4 — Diffusion multi-canal (1) / Multichannel diffusion (1) ([December] 1999). https://econtact.ca/Diffusion/Investigative.htm
