A Review and an Extension of Pseudo-Stereo for Multichannel Electroacoustic Compositions: Simple DIY ideas

GAUS (Groupe d’Acoustique de l’Université de Sherbrooke), CIRMMT (Centre for Interdisciplinary Research in Music, Media, and Technology)

In this tutorial, pseudo-stereo techniques used to give “volume” to a monophonic sound over conventional two-channel stereophonic systems are reviewed and then conceptually extended to multichannel systems for electroacoustic music composition. The applications are illustrated as Pure-Data do-it-yourself examples. At a time when most electroacoustic, acousmatic or experimental composers and artists are involved (conceptually or practically) with complex technologies, it is important for them to know how to hack commercial technologies. This motivates such a tutorial. Not that I think artists should become technicians or technologists; I simply suppose that if complex electroacoustic systems are the new instruments of a growing artistic population, the “instrumentalists” and users should understand some technical basics in order, while working creatively, to go beyond the technically or commercially defined horizon of possibilities. This is a sort of defence mechanism against a dominating technological determinism, surely ever more present in general culture. In this vein, care must however be taken to prevent confusion between artistic goals and a fascination for the technical means. The tutorial is divided as follows: 1) monophonic and stereophonic sound diffusion (with historical notes, plus relations with the loudspeaker as a discrete source in a multichannel context); 2) pseudo-stereo techniques with examples; 3) simple multichannel pseudo-stereo; 4) conclusion (including conceptual considerations); and 5) references.

Introduction

The very beginning of sound recording technologies represents an abrupt discontinuity in the spatial auditory experience. (This discontinuity, although already experienced in various forms in other fields of auditory techniques and cultures (Sterne, 2003), is considered here from a recording viewpoint.) Before monophonic sound recording, any auditory experience was indeed naturally spatial. With the advent of sound recording, recorded sound events are spatially distorted: every spatial characteristic of the sound events passes through a tight pipe, the monophonic signal. This distortion characterizes the loss of spatial audition qualities (in comparison with the recorded events) in the realm of a newly elaborated medium.

These discontinuities and distortions were most remarkable in the popular culture of sound recording, which was tightly related to what was available, pushed by companies, the corresponding ads and popular images of the “new sound medium”, its usages and latent social meanings (Sterne, 2003). By marked contrast, right at the onset of the historically official musical exploration of sound recording as a musical medium or instrument, space was considered a potentially relevant dimension of musique concrète (Poulin, 1957; Schaeffer, 1957; Tardieu, 1957; Moles, 1960) (1). As one surely knows, current spatial electroacoustic music strongly depends on technical means, but also on technical knowledge and originality.

With this in mind, we look in this tutorial at potential techniques for enlarging monophonic sound recordings, using well-known pseudo-stereo approaches which surely underlie many recent and expensive audio plug-ins.

Monophonic and Stereophonic Sounds

Monophonic sound was obviously a technical starting point for sound recording. At that time, the sound-media chain, including sound capture (sound sensor), sound recording (or transmission) and sound reproduction (loudspeaker), had to be tested, evaluated and developed before one could even imagine the interests and uses of multichannel sounds and the corresponding stereophonic effects. So although a historical discontinuity was introduced within the spatial auditory experience, from natural hearing (without any electroacoustic system) to artificial monophonic hearing (with an electroacoustic system), this was caused by technical constraints and not by a mere voluntary move imposed by the technology developers: it was a necessary rupture. (A rupture that nevertheless proved fundamentally useful for the creators of musique concrète, since the invention of sound recording introduced the sound object: the material inscription of sound (Moles, 1957).)

The British engineer A.D. Blumlein introduced most of the basics of stereophonic (2) sound over two-channel systems in the 1930s (Blumlein, 1933; Rumsey, 2001). Blumlein, through patented ideas, showed that it is possible, using amplitude differences on a pair of loudspeakers, to create phase differences between the ears that can correspond to a natural hearing situation. This was the beginning of level-difference stereo using coincident microphone pairs and the corresponding panning laws. Phantom images on “surround” or multichannel systems are still achieved today using level-difference stereo. In electroacoustic or acousmatic music, level-difference stereo is predominantly used (3), since most mixer panning laws (panpots) are based on constant acoustical power distribution, which only changes the relative amplitudes of the left and right versions of an originally monophonic signal (Eargle, 1986). Level-difference stereo was thus easily extended to multichannel systems. Most of today’s multichannel systems use level differences to place phantom images between the loudspeakers: this is again stereophony as Blumlein defined it in the 1930s. (For a complete review and history of stereophonic sound, see AES (1986).)

Multichannel Works with the Loudspeaker as a Discrete Source

If you are working with multichannel systems with the idea of using the loudspeakers as discrete sources of sound, you will surely go ahead with monophonic signals: one signal for one loudspeaker used as a discrete source. And here, precisely, it is possible to call on pseudo-stereo to play with the “spatial dimension” (size or volume) of this monophonic sound. A simple example would be to rapidly or slowly make the monophonic signal explode in space by turning it into a big (spatially extended, not simply louder) and spatially complex sound source. In my view, such modulation would not introduce the very intense and marked effects (sometimes a dangerous attraction for the composer) of a source rotating rapidly around an audience. On the contrary, pseudo-stereo adds something more subtle to the artificial electroacoustic auditory experience, something more tightly related to the natural spatial hearing we experience in everyday life, where real spatial sounds are not always frenetically moving in space. This is a matter of perspective and spatial arrangement more than of extreme sound kinetics. Pseudo-stereo, pushed to the extreme, could however be used for highly dynamic creations.

Also, within the context of the “loudspeaker as a discrete source” paradigm, pseudo-stereo is an exit from the physical constraints of the loudspeakers. Indeed, one must be aware that when using the loudspeaker as a discrete source (as in the older type of loudspeaker concert (Bayle, 1993; Clozier, 2001)), the individual spatial character of the monophonic sound sent to a given loudspeaker is exclusively defined by that loudspeaker’s acoustical radiation (directivity as a function of frequency, how it excites the acoustical response of the room, etc.). At a time when most electroacoustic concerts are made over L surrounding loudspeakers of the same brand and model, it is obvious that the spatial character of each discrete source will be the same, except for the positions of the auditory events in relation to the listener’s position. So, as shown in the following DIY ideas, pseudo-stereo can be an exit from the determined and uniform character of the reproduction sources: oddly distributing the monophonic recording to more than one loudspeaker creates a more complex, singular sound source in space.

One also notes that pseudo-stereo for multichannel compositions is somewhat of a temporary state while waiting for the new technologies of synthetic-directivity sources. Such a synthetic-directivity loudspeaker includes a set of acoustical sources and, by multichannel control (within each loudspeaker cabinet), will be able to create various directivity patterns up to 20 kHz (including omnidirectional radiation at 20 kHz) emerging from a discrete source. In this case, the loudspeaker as a discrete source within a multichannel creation context will reach its full power, since the spatial character of each reproduction source will be individually controllable. Synthetic-directivity sources are a topic of research in different laboratories and institutes such as IRCAM, GAUS and CIRMMT.

Pseudo-Stereo

Let us first put forward a technical definition of pseudo-stereo. Pseudo-stereo is simply whatever you might imagine, or accidentally discover, that enlarges the perceived spatial extent of a monophonic sound in comparison with the simple display of that monophonic signal over one, two or more loudspeakers (i.e. every channel fed the same monophonic signal). A nice and simple definition of pseudo-stereo was introduced by Schroeder (1958) in his paper title: “An artificial stereophonic effect obtained from a single audio signal”.

We now proceed to the classical pseudo-stereo techniques for two-channel systems. The first class of pseudo-stereo approaches is based on the space-dependent distribution of the frequency content of a monophonic sound. The second class of methods, less common and less effective, relies on the space-dependent distribution of very short time delays (we are not speaking of echoes) and phase distortions of the monophonic signal.

Spatial Distribution of Frequencies

For the two-channel situation, the simplest realization of pseudo-stereo is shown in Fig. 1. In the Pure-Data realization (left in Fig. 1), the [inlet~] object represents an incoming monophonic signal and the [outlet~] objects represent the resulting pseudo-stereo (PS) signals. The signal flows from top to bottom. The left PS signal is the sum [+~] of the mono signal and a one-sample delayed version of it (samples as in “44,100 samples per second on CD”). The right PS signal is the difference [-~] between the mono signal and the one-sample delayed version. These are very simple means of creating equalization filters. Taking a look at the corresponding spectrums (right in Fig. 1), we see that for an original monophonic noise (white noise: flat spectrum, i.e. all frequencies uniformly present), the PS signals are simply low-pass and high-pass filtered noises. The left PS signal contains most of the low frequencies while the right one contains the remaining high frequencies. Since the slopes of the spectrums are quite smooth, there is no abrupt separation of the original frequency content. This example was graphically illustrated with white noise, but any other signal would take advantage of this PS effect.

Listening to the monophonic sound equally distributed over two loudspeakers and comparing it with the PS signals sent to the left and right loudspeakers should suffice to demonstrate the usefulness of pseudo-stereo. In both cases, the auditory event is localized between the loudspeakers, but the PS version of the monophonic signal occupies a much broader place in auditory space.

This preliminary example summarizes the important points of pseudo-stereo. The most impressive and efficient pseudo-stereo is obtained with the spatial distribution of frequencies. In Fig. 1, this was obtained by sending the low frequencies to the left channel and the high frequencies to the right channel, with a smooth transition. The effect might be explained as follows. In the presence of a big natural sound source, most of the acoustical radiation is space dependent: some frequencies radiate in a given direction, while other frequencies radiate in other directions with different intensities, and so on. Pseudo-stereo is a technical electroacoustic approximation of this natural reality.

Figure 1
Fig. 1. A simple pseudo-stereo operation using different frequency distributions over two loudspeakers. Left: Pure-Data realization. Right: Resulting frequency spectrums.
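The sum/difference idea of Fig. 1 can be sketched outside Pure-Data in a few lines of Python; the function name is my own, and the [z~ 1] object becomes a one-sample delay variable:

```python
def pseudo_stereo_sum_diff(mono):
    """Split a mono signal into two complementary PS channels:
    left = x[n] + x[n-1] (gentle low-pass), right = x[n] - x[n-1] (gentle high-pass)."""
    left, right = [], []
    prev = 0.0  # the one-sample delay line ([z~ 1]), initially silent
    for x in mono:
        left.append(x + prev)   # adjacent samples reinforce: lows kept
        right.append(x - prev)  # adjacent samples cancel: lows removed
        prev = x
    return left, right

# A constant (0 Hz) input ends up entirely in the left channel:
left, right = pseudo_stereo_sum_diff([1.0, 1.0, 1.0, 1.0])
# left  -> [1.0, 2.0, 2.0, 2.0], right -> [1.0, 0.0, 0.0, 0.0]
```

Feeding the routine a rapidly alternating signal instead would push the energy to the right channel, which is exactly the low/high split visible in the Fig. 1 spectrums.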

Although the previous example posed the basic ideas of any PS system based on the spatial distribution of the frequency content of a monophonic signal, I give some other examples and variations of this realization. In Fig. 2 (left part), one can see the frequency spectrums of the Pure-Data realization of Fig. 1 but with a two-sample delay ([z~ 2] replacing [z~ 1]). This is a simple comb filter. As shown in Fig. 2 (left part), the frequency distribution is sharper: the left PS signal gets the low and high frequencies while the right PS signal gets the middle frequencies. This filtering is more drastic. To smooth the dips and notches in the resulting frequency spectrums, it is possible to reduce the amplitude of the delayed signal before adding it to, or subtracting it from, the original monophonic signal. An example of the resulting spectrums is shown on the right in Fig. 2. In this situation, a half-gain object [*~ 0.5] (multiplication of the audio signal by ½) has been used right after the two-sample delay [z~ 2]. This produces a less intense filtering which, in some cases, reduces the transformation of the original signal. A typical example is speech, which sometimes gives artificial results with filters like those that produced the spectrums on the left in Fig. 2.

Figure 2
Fig. 2. Frequency spectrums for the Pure-Data realization shown in Fig. 1. Left: The unit-sample delay (Fig. 1) has been replaced by two-sample delay. Right: Same realization with the delayed signal divided by two (half amplitude) to smooth the frequency dips.

A better-known method belonging to this class of PS methods using the spatial distribution of frequency is Lauridsen’s method (Eargle, 1986; Schroeder, 1958), shown in Fig. 3. This method, using two complementary comb filters (a generalization of the simple Fig. 1 example), seems to be the first historical advent of pseudo-stereo (4). The interest of comb filters is that the number of dips in the PS signal spectrums is controlled by the filter delay and the depth of the dips is controlled by the filter gains. A greater variety of equalization filters can thus be achieved more easily with comb filters than with the realization of Fig. 1.

Figure 3
Fig. 3. Pseudo-stereo using Lauridsen’s method. Left: Pure-Data realization. Right: Resulting frequency spectrums.
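Assuming Lauridsen-style complementary comb filters of the form H_left(f) = 1 + g·e^(−i2πfD) and H_right(f) = 1 − g·e^(−i2πfD), their magnitude responses can be probed numerically (function name mine):

```python
import cmath
import math

def comb_magnitudes(f_hz, delay_s, gain):
    """Magnitudes of two complementary comb filters at frequency f:
    H_left(f) = 1 + g*exp(-i*2*pi*f*D), H_right(f) = 1 - g*exp(-i*2*pi*f*D)."""
    z = gain * cmath.exp(-2j * math.pi * f_hz * delay_s)
    return abs(1 + z), abs(1 - z)

# With a 0.1 ms delay the comb pattern repeats every 1/D = 10 kHz; where
# one channel has a full notch, the other has a full boost:
l, r = comb_magnitudes(5000.0, 0.0001, 1.0)  # l ~ 0.0 (notch), r ~ 2.0 (boost)
```

Reducing the gain below 1.0 shrinks the notch depth, which is exactly the smoothing effect described for Fig. 2.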

Now that you know the basic behavior of pseudo-stereo using the spatial distribution of the frequencies of an original monophonic signal over two loudspeakers, you can apply it by any means or crossfade it with the original monophonic sound. A new example is shown in Fig. 4 using a set of band-pass filters. With white noise, this example gives a very soft and spatially diffuse drone in comparison with the equivalent monophonic signal. Here the frequency distribution is concentrated below 1600 Hz and the difference between the PS signals is less intense (2.5 dB at 500 Hz, 6 dB at 1000 Hz, 2.5 dB at 1500 Hz).

Figure 4
Fig. 4. Pseudo-stereo using band-pass filter outputs exclusively allocated to the two-channel stereophonic output. Left: Pure-Data realization. Right: Resulting frequency spectrums.

The last example of PS using the spatial distribution of frequency over two loudspeakers is shown in Fig. 5. This time, the frequency separation is achieved with simple resonant filters for which the cut-off frequencies and quality factors (“Q” or “resonance” in most commercial audio filters) can be controlled independently. This example is interesting and instructive: it shows that if the filters produce PS spectrums that are too different, the auditory event might segregate (see Bregman (1999) on segregation) into two distinct events.

Of course, any filtering (parametric equalizers, parallel or serial resonant filters, etc.) can be used to create two PS signals by spatial distribution of the frequency content. The important point is to create two signals that share the same informational content but differ in their spectrum equalization at each loudspeaker of a two-channel setup.

Figure 5
Fig. 5. Pseudo-stereo using two different resonant filters (left channel resonance at 660Hz, right channel resonance at 1340Hz). Left: Pure-Data realization. Right: Resulting frequency spectrums.
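A two-pole resonator in the spirit of the Fig. 5 filters can be sketched as follows; the difference equation is a textbook form, and the mapping from f0/Q to the pole radius is a rough narrowband approximation, not the one used by any particular Pure-Data object:

```python
import math

def resonator(mono, f0_hz, q, sr=44100):
    """Two-pole resonant filter: y[n] = x[n] + a1*y[n-1] + a2*y[n-2]."""
    w0 = 2 * math.pi * f0_hz / sr
    r = 1 - w0 / (2 * q)            # pole radius: higher Q -> narrower peak
    a1 = 2 * r * math.cos(w0)
    a2 = -r * r
    y1 = y2 = 0.0
    out = []
    for x in mono:
        y = x + a1 * y1 + a2 * y2
        out.append(y)
        y2, y1 = y1, y
    return out

# Two differently tuned resonators give the two PS channels, e.g.:
# left  = resonator(mono, 660.0, 5.0)   # resonance at 660 Hz
# right = resonator(mono, 1340.0, 5.0)  # resonance at 1340 Hz
```

Raising Q narrows each peak; push the two resonances too far apart and, as noted above, the auditory event may split in two.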

The last examples also draw attention to the complementary nature of the left and right pseudo-stereo filters. By this, one simply notes that (in the case of Figs. 1 to 3) if a frequency is predominantly present in the left PS channel, it is reduced in amplitude in the right channel (and vice versa). If this were not the case, the resulting PS process would not simply change the spatial character of the signal but also its frequency content, and a notable equalization could become audible. This will be of considerable importance in the multichannel applications of PS.
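For the Fig. 1 sum/difference pair, this complementarity can even be checked numerically: the squared magnitudes of the two filters add up to a constant at every frequency, so the two channels taken together introduce no net coloration (function name mine):

```python
import cmath
import math

def power_sum(f_norm):
    """|1 + e^{-i*2*pi*f}|^2 + |1 - e^{-i*2*pi*f}|^2 at normalized frequency f."""
    z = cmath.exp(-2j * math.pi * f_norm)
    return abs(1 + z) ** 2 + abs(1 - z) ** 2

# The sum/difference pair is power-complementary: the total is 4 at
# every frequency across the band.
assert all(abs(power_sum(f / 100) - 4.0) < 1e-9 for f in range(100))
```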

Spatial Distribution of Time Delays and Phase Distortions

While the spatial distribution of frequency characterizes the most intense pseudo-stereo effects (Schroeder, 1958), some descriptions of and comments on the spatial distribution of time delays and phase distortions over two-channel stereophonic systems are briefly introduced here. Few words will be spent on this because the resulting effects are less remarkable than those described in the previous subsection. The first historical advent of PS by Lauridsen in fact included a combination of spatial distribution of frequency and of phase.

To clarify the point about the spatial distribution of time delays and phase distortions, two technical definitions are needed. Within a digital signal processing (DSP) context, when one speaks of delays and phase, one is not talking of the more musical conception of delays (that is, perceptible echoes) and phase (as with a “phaser” effect). In DSP, a useful time delay is usually in the range of the sampling period. This type of delay was used in Figs. 1 and 2, where the signal was delayed by one sample (1/44100 = 0.0000227 second at a sampling rate of 44,100 Hz) or two samples. We are again talking of such short time delays. The phase of a signal is characterized by a time delay (for a given frequency) expressed as a fraction of the frequency’s cycle relative to a given time origin. If this is not clear, simply consider that phase distortions represent short time delays (in comparison with the period (5) of the frequencies) which change as a function of frequency.

A common way to change the phases of a signal without changing its frequency content is with all-pass filters. Such filters let all frequencies pass but distort the phase of the signal as a function of frequency. All-pass filters are so common that most software environments like Pure-Data, Max and Csound include all-pass objects. A Pure-Data realization is shown in Fig. 6 with an [allpass~] object. The phases of the left (unchanged) and right (all-pass filtered) signals are also shown in Fig. 6. Clearly, the phase as a function of frequency is distorted in comparison with the original phase of the left PS signal. The phase spectrums of the PS signals are expressed in radians. To clarify the meaning of radians: a phase shift of π = 3.1416 radians (or nπ with n odd, marked by × in Fig. 6) corresponds to a signal inversion (positive becoming negative and vice versa), while a phase shift of 2π (or nπ with n even, marked by + in Fig. 6) simply gives a time delay of one period. Playing these PS signals over a two-channel setup produces a space-dependent phase distortion of the original monophonic sound. The PS signals obtained with phase distortions create a very subtle effect. The resulting spatial impression approaches that of a simple phase inversion (reversed polarity) of one channel in comparison with the original signal. For this reason, PS using phase distortions (very short time delays) is less common. However, extreme values of the delays and gains of the all-pass object are a possible creative avenue for PS.

Figure 6
Fig. 6. Pseudo-stereo using an all-pass filter for phase distortion. Left: Pure-Data realization. Right: Resulting phase spectrums.
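A first-order all-pass section of the general kind [allpass~] provides can be sketched with a textbook difference equation; the coefficient value here is an arbitrary starting point, not the one in the Fig. 6 patch:

```python
def allpass_first_order(mono, a=0.5):
    """First-order all-pass: y[n] = -a*x[n] + x[n-1] + a*y[n-1].
    Unit gain at every frequency, but a frequency-dependent phase shift."""
    out = []
    x_prev = 0.0
    y_prev = 0.0
    for x in mono:
        y = -a * x + x_prev + a * y_prev
        out.append(y)
        x_prev, y_prev = x, y
    return out

# Left channel: the untouched mono signal; right channel: the all-pass
# filtered copy. Same spectrum on both sides, different phases.
```

On a constant (0 Hz) input the output settles back to the input value, as the unit-gain property requires; only the transient, i.e. the phase behaviour, differs.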

Multichannel Pseudo-Stereo

All the previous examples and descriptions have really defined everything about PS. Extending it to multichannel systems is simply a matter of extrapolation, deduction and a little innovative thinking. For this reason, a single example will be described.

Fig. 7 shows this multichannel example for a hypothetical frontal three-channel setup. It is a very simple extension of Fig. 3 in which an additional comb filter has been included. Also note that this new comb filter uses a different time delay (0.123 millisecond versus 0.1 millisecond) to create a different filter (dips at different frequencies), so that the three PS signals each have a different frequency distribution in space. That is a summarizing example of pseudo-stereo for multichannel systems.

Figure 7
Fig. 7. Three-channel pseudo-stereo with comb filters. Left: Pure-Data realization. Right: Resulting frequency spectrums.
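The extension can be sketched as one comb filter per channel, each with its own delay; the alternating comb polarity and the gain below are illustrative choices of mine, not read off the Fig. 7 patch:

```python
def multichannel_ps(mono, delays, gain=1.0, sr=44100):
    """Naive N-channel pseudo-stereo: one comb filter per channel,
    each with its own delay (in seconds)."""
    channels = []
    for i, d in enumerate(delays):
        n = max(1, round(d * sr))           # delay in whole samples
        sign = 1.0 if i % 2 == 0 else -1.0  # alternate +/- combs
        ch = [x + sign * gain * (mono[k - n] if k >= n else 0.0)
              for k, x in enumerate(mono)]
        channels.append(ch)
    return channels

# Three channels: slightly different delays (0.1 ms vs 0.123 ms) place
# the comb dips at different frequencies for each loudspeaker.
channels = multichannel_ps([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                           [0.0001, 0.000123, 0.0001])
```

With identical delays and polarities on every channel there would be no PS effect at all, so varying at least one of the two per channel is the whole point.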

In the previous section, it was suggested that the complementarity of the PS filter responses is an important issue. To clarify this point, Fig. 8 shows the spectrums of the summation of the two-channel PS signals of Fig. 3 and of the summation of the three-channel PS signals of Fig. 7 (6). From this figure, it is evident that the three PS signals of Fig. 7 are not complementary, since they create a global equalization of the signals: two major notches in the resulting spectrum (see Fig. 8).

This was foreseeable from Fig. 7, where one can see that, around 5 kHz, channels 2 and 3 boost the signal: they are not complementary. Again from Fig. 7, it seemed that between 7.5 kHz and 17.5 kHz the three PS signals are somewhat complementary: channel 1 boosts the signal around 10 kHz, channel 2 around 12.5 kHz and channel 3 around 15 kHz. Fig. 8 shows that this is not exactly the case. In comparison with the two-channel case in Fig. 8, it is clear that the three-channel case (again in Fig. 8) suffers more severely from the non-complementary nature of the comb filters shown in Fig. 7. However, such frequency-spectrum transformation by PS is not so audible when the signals are played over more than one loudspeaker. For the most extreme cases, some equalization (using a conventional parametric equalizer before the pseudo-stereo filters) can be included to compensate for the potentially undesirable coloration of PS in multichannel situations.

Figure 8
Fig. 8. Frequency spectrums for the addition of the 2-channel PS signals of Fig. 3 and for the addition of the 3-channel PS signals of Fig. 7. Gray line: PS spectrums from Fig. 7.

Generally speaking, then, comb filters are not perfectly suited for multichannel pseudo-stereo (they do not easily, that is without any a priori computation, provide complementary filters for more than two channels), but they provide a very simple way to create PS filters for multichannel systems.

Again, any other type of filtering (bank of band-pass filters, parametric equalizers, etc.) to create different multichannel signals could be used.

Multichannel pseudo-stereo with phase distortions is not considered here, since it can easily be extrapolated from the previous sections and is also less efficient at creating a stereo broadening effect from a monophonic signal.

Creative Experiments with Pseudo-Stereo

The best way to experiment with pseudo-stereo is to first test it with broadband noise and to innovate with your own PS architectures or variations of the presented examples. Next, try it with various monophonic materials (voices, instruments, grains, synthesis, etc.) and parameter values (filter shapes, comb-filter delays and gains, etc.). You will soon sense that in some extreme cases, the pseudo-stereo methods exposed in this tutorial give effects, convincing or not, that depend on the type of processed sound material.

As an example of sound-dependent PS, let us consider a band-limited (20 Hz to 10 kHz) sound passed through the two-channel PS filters shown in Fig. 1. Since the PS separation of these filters is around 10 kHz (0-10 kHz to the left and 10-20 kHz to the right), this band-limited signal will simply be pushed to the left and nearly nothing will be sent to the right loudspeaker. In this case, the frequency distributions shown in Figs. 2 and 3 might be more appropriate for a real PS effect over the 20 Hz to 10 kHz band. Even with broadband noise (20 Hz to 20 kHz), a filtering like the one shown in Fig. 1 might pull the sound to the left, since most of us are less sensitive to sound from 2 kHz up to 16 or 20 kHz (depending on age and hearing damage) (Zwicker et al., 1999), so that the right signal shown in Fig. 1 might need a little amplitude boost to reach its full efficiency. Hence, although not included in any of the PS patches presented in this tutorial, simple gain controls might be needed for manual adjustment before sending each PS signal to the loudspeakers.
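Such a per-channel gain trim before the loudspeaker outputs amounts to one multiplication per channel; the function and gain values below are illustrative only, and in practice the gains would be set by ear:

```python
def trim(ps_channels, gains):
    """Apply a manual gain to each PS channel before its loudspeaker."""
    return [[g * x for x in ch] for ch, g in zip(ps_channels, gains)]

# e.g. boost the high-frequency (right) PS channel of Fig. 1:
left, right = trim([[0.5, 0.5], [0.5, 0.5]], [1.0, 2.0])
# left -> [0.5, 0.5], right -> [1.0, 1.0]
```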

Although not presented in this tutorial, pseudo-stereo need not be limited to equalization and filters. Non-linear distortions, amplitude modulations (AM), frequency modulations (FM) (including micro-modulations) or ring modulations, each defined differently for each loudspeaker, can give very interesting results for spatial sensations (see Bregman (1999) and Blauert (1999) for some sources of inspiration) (7).
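As one hypothetical sketch in this direction, each loudspeaker could receive its own slow amplitude micro-modulation; the rates and depth below are arbitrary starting points of mine, not values from the tutorial:

```python
import math

def am_decorrelate(mono, rates_hz, depth=0.1, sr=44100):
    """Give each loudspeaker channel its own slow amplitude modulation
    so the channels drift apart in level over time."""
    channels = []
    for r in rates_hz:
        ch = [x * (1.0 + depth * math.sin(2 * math.pi * r * k / sr))
              for k, x in enumerate(mono)]
        channels.append(ch)
    return channels

# e.g. three channels breathing at different sub-audio rates:
# channels = am_decorrelate(mono, [0.3, 0.5, 0.7], depth=0.1)
```

Keeping the rates below roughly 1 Hz and the depth small leaves the modulation itself barely audible while still differentiating the channels.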

I leave you here with an open field of possible experimentation and aesthetically promising creations using pseudo-stereo and its variations over multichannel systems. There is no secret to reaching a fruitful, sensitive approach to spatial sound and music: creative experiments as a learning and expressive process, along with an acute listening sensibility. (In my view, the experimental nature of the sound-art creative process has been well discussed by Moles (1957), although I do not share every detail of his ideas.)

Conclusion

This tutorial was devoted to simple practical applications of pseudo-stereo techniques that can be extended to multichannel systems using open-source software like Pure-Data. Briefly summarized, it has been suggested that it is possible to give “volume” to a monophonic recording using pseudo-stereo techniques. These techniques share a common point: transforming the monophonic signal into L different signals (using EQ, filters, phase distortions and time delays) which are then sent to the L loudspeakers of a given multichannel system. The resulting multichannel sound radiation is then more like that of a natural sound source extended in space: a real sound source radiates sound with a different character (frequency content, delays) as a function of space.

Multidisciplinary?

As pointed out right at the onset of this tutorial, the underlying objective of a technical tutorial like this one for electroacoustic and acousmatic artists or composers is not a mere illustration of the pressures and influences of the technical domain over the artistic domain. Moles’ text on musical machines and David’s description of cybernetics highlight a tricky confusion that often happens when working with machines and technology in the arts: a confusion between the artistic intentions (aesthetic, sensory and intellectual communication) and the technical means (Moles, 1957; David, 1965; see also Berdiaeff, 1933). In my view, the cybernetic description of the engineer (David, 1965, “Chapitre préliminaire”) as someone who loses the distinction between intentions (a social project) and means summarizes many current tendencies, attitudes and cultures, even in the arts. That is the paradigm of “it is possible, so let’s do it”. Care must be taken on this matter. (A concise review of technique in the arts has also recently been published by Pasquier (2005).)

This tutorial was formulated as an example of the knowledge exchange that I would like to encounter more often in real life and in informal multidisciplinary exchanges (that is, outside the granted research groups). The main objective of the tutorial was to leave something that composers and artists can easily absorb and use for their own artistic research.

Spatial Music, Virtual Reality, Conceptual Art and Immersion

The tutorial was also influenced by considerations about techniques and technologies in the arts. Facing technology from the artistic or humanist domain, the real problem is not technology (and technological evolution) in itself. The problem comes from the human position, and human indifference, when faced with the strong hold of the technological field over the social and human domains (this has been nicely described by Schaeffer (1957) in his letter to A. Richard). Most of these ideas should be kept in mind while playing with technique in creation.

In this context, I think that artists should actively take a position within our constantly evolving technological era. This means not only using technologies but also investigating or criticizing the corresponding industrial and commercial cultures built around our technological world, dominated as it is by engineering and commercial purposes (see Dutton (2005) on business and the arts). My favourite example is virtual reality and virtual art (Grau, 2003). If some artists investigate spatial audio, is it possible to do so without thinking about what spatial audio belongs to: virtual reality and augmented multimedia systems? As virtual reality and multimedia strongly influence and literally change popular culture, is it possible to use spatial audio without a conceptual art approach? Would you accept to make a composition with the most powerful and flexible synthesis/electroacoustic machine (one that understands and produces everything you can think of) if it were also the most polluting fuel-combusting device ever made by humans? Of course, this is a caricature of the real problem, but I think it is somehow representative.

In my view, spatial sound is not only a conceptual problem from the (more or less traditional) musical viewpoint, since it also strongly belongs to the world of audio artists and virtual-art creators. On this matter, the conceptual and historical positions laid out by Grau (2003) in his book Virtual Art, about immersive images, could serve as an immense and intense source of inspiration for future debates and conceptual considerations about spatial sound and spatial music. As the author points out in the introduction to his book, virtual reality (in which one finds spatial audio) is not only a matter of reproducing a given real space (mimesis) but mostly a matter of immersion: that is, reducing the critical distance between what is shown and the observer by working on the sensation of presence. A spatial music displayed over a multichannel sound system (or a binaural system) will not necessarily be profoundly different (aesthetically or conceptually: we hear the same informational content) from a monophonic version of the work, but the spatial multichannel version of the composition might reduce this critical distance, and thus not only modulate the musical composition in itself but, more precisely, change the sensorial relation with the composition. And here, spatial sound gets conceptually linked to virtual reality.

References

The following references offer interesting reading on the subject and support the present tutorial. Writings on multichannel music projection have recently been published by www.econtact.ca (issues 7.2 and 7.4). Some references relate directly to pseudo-stereo, while others are suggested readings on art and technology.

AES. Stereophonic Techniques: An Anthology of Reprinted Articles on Stereophonic Techniques. Audio Engineering Society, 1986.

AES Staff Writer. ‘Multichannel Audio Systems and Techniques’, in Journal of the AES, vol.53, no.4, pp.329–335. 2005.

Bayle, François. Musique acousmatique, propositions… positions. Paris: Buchet/Chastel—INA-GRM (ed.), 1993.

Berdiaeff, N. L’homme et la machine. Éditions « je sers », 1933.

Blauert, Jens. Spatial Hearing: The psychophysics of human sound localization. Cambridge, MA: MIT Press, 1999.

Blumlein, A.D. ‘Improvements in and Relating to Sound-Transmission, Sound-Recording and Sound-Reproducing Systems’, in British Patent Specification 394,325, 1933.

Bregman, Albert S. Auditory Scene Analysis: The perceptual organization of sound. Cambridge, MA: MIT Press, 1999.

Clozier, Christian. ‘The Gmebaphone Concept and the Cybernéphone Instrument’, in Computer Music Journal, Vol. 25:4, pp. 81–90. 2001.

David, A. La cybernétique et l’humain. Paris: Gallimard, 1965.

Davis, M.F. ‘History of Spatial Coding’, in Journal of the AES, vol.51, no.6, pp.554–569. 2003.

Delalande, François. Le Son des Musiques — Entre Technologie et Esthétique. Paris: Buchet/Castel, 2001.

Dutton, Paul. ‘widgets über alles — bizspeak and the arts’, in MusicWorks, no.92. 2005.

Eargle, J. Handbook of Recording Engineering. Van Nostrand Reinhold Company, 1986.

Grau, O. Virtual Art — From illusion to immersion. MIT Press, 2003.

Kahn, Douglas. Noise, Water, Meat: A History of Sound in the Arts. The MIT Press, 1999.

Kahn, Douglas and G. Whitehead. Wireless Imagination — Sound, radio and the avant-garde. MIT Press, 1992.

Moles, A.A. Les musiques expérimentales. Éditions du cercle d’art contemporain, 1960.

Moles, A. ‘Machines à musique — L’apport des machines électroniques et électro-acoustiques à la nouvelle sensibilité musicale’, in Vers une musique expérimentale, La revue musicale, no.236, pp.115–127. 1957.

Pasquier, P. ‘A Reflection on Artificial Intelligence and Contemporary Creation, the Question of Technique’, in Parachute, no.119, pp.152–167. 2005.

Poulin, J. ‘Son et espace’, in Vers une musique expérimentale, La revue musicale, no.236, pp.105–114. 1957.

Rumsey, Francis. Spatial Audio. Focal Press, 2001.

Schaeffer, Pierre. À la recherche d’une musique concrète. Paris: Éditions du Seuil, 1952.

_____. ‘Lettre à Albert Richard’, in Vers une musique expérimentale, La revue musicale, no.236, pp.III–XVI. 1957.

_____. ‘Vers une musique expérimentale’, in Vers une musique expérimentale, La revue musicale, no.236, pp.11–27. 1957.

Schroeder, M.R. ‘An Artificial Stereophonic Effect Obtained from a Single Audio Signal’, in Journal of the AES, vol.6, no.2, pp.74–79. 1958.

Simondon, G. Du mode d’existence des objets techniques. Éditions Aubier, 1989.

Sterne, J. The Audible Past — Cultural origins of sound recording. Duke University Press, 2003.

Streicher, R. and W. Dooley. ‘Basic Stereo Microphone Perspectives – A Review’, in Journal of the AES, vol.33, no.7/8, pp.548–556. 1985.

Tardieu, J. ‘Décade de musique expérimentale — Présentation de la 9e séance’, in Vers une musique expérimentale, La revue musicale, no.236, pp.103–104. 1957.

Zwicker, E., and H. Fastl. Psycho-acoustics — Facts and models. Springer, 1999.

Other Publications by the Author

Print

Berry, A. and Pierre-Aubert Gauthier. ‘Adaptive Wave Field Synthesis with Independent Radiation Mode Control for Active Sound Field Reproduction’, accepted abstract for Proceedings of Active 2006, Adelaide, Australia.

Gauthier, Pierre-Aubert and A. Berry, ‘Adaptive Wave Field Synthesis with Independent Radiation Mode Control for Active Sound Field Reproduction: Theory’, in Journal of the Acoustical Society of America, vol. 119:5, pp. 2721–2737, 2006.

Gauthier, Pierre-Aubert, A. Berry and W. Woszczyk. ‘Creative Sound Projection using Loudspeakers Arrays’, in MusicWorks 93, pp.14–16, 2005.

_____. ‘Sound Field Reproduction In-room using Optimal Control Techniques: Simulations in the frequency domain’, in Journal of the Acoustical Society of America, vol. 117:2, pp. 662–678, 2005.

_____. ‘Sound Field Reproduction using Active Control Techniques: Simulations in the frequency domain’, invited contribution for Proceedings of the 18th International Congress on Acoustics, Kyoto, Japan, 2004.

Electronic

Gauthier, Pierre-Aubert. ‘Sound Reproduction using Multi Loudspeakers Systems’, in eContact! 7.2.

Gauthier, Pierre-Aubert, A. Berry and W. Woszczyk. ‘An Introduction to the Foundations, the Technologies and the Potential Applications of Acoustic Field Synthesis for Audio Spatialization on Loudspeaker Arrays’, in eContact! 7.2.


  1. Suffice it here to mention that I simplify historical debates and details by assuming that the research of Schaeffer (and his collaborators) toward a “musique concrète” is the milestone of the first artistic experiments with sound recording. Kahn details pre-Schaefferian experiments with, and ideas of, sound recording in the arts (Kahn, 1999). However, the beginnings of “musique concrète” were marked by a body of writings, draft theories and an international meeting on experimental music (Schaeffer, 1957), which may crudely be considered the first historical and theoretical inscription of sound recording in the arts. The simplification is also admissible because, I think, it characterizes the general feelings about space in the early creative uses of sound recording.
  2. In this tutorial, stereophonic sound is not limited to two-channel systems. Stereophony here describes a system with any number of channels, as long as the spatial audio method relies on level differences or time differences between channels to create phantom images between the loudspeakers.
  3. For the purpose of this tutorial on pseudo-stereo, I neglect any stereo field recording in electroacoustic music.
  4. The original Lauridsen method combines a spatial distribution of the frequency content and of the phase of an original monophonic signal (Schroeder, 1958). After Lauridsen’s description of the method, these two types of distribution (frequency and phase) were divided into two classes for further investigation and development.
  5. The period is the duration of one complete cycle of a tone: the signal rises to its maximum, descends into its negative range, and returns to zero. The period is the inverse of the frequency in Hz. Examples: 20 Hz gives 1/20 = 0.05 second; 1 kHz gives 1/1000 = 1 millisecond; 2 kHz gives 1/2000 = 0.5 millisecond.
  6. This summation is a very crude approximation of what actually reaches the listener’s ears. With a real sound projection system, these PS signals are further transformed (filtering, phase changes, time delays, etc.) by sound propagation in the listening space. Fig. 8 must therefore be seen as an illustrative shortcut for what can result from PS.
  7. Such approaches would, however, not correspond to the strict technical definition of pseudo-stereo.
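The delay-based frequency distribution of note 4 can be sketched in a few lines of Python. This is a minimal illustration under my own assumptions, not Lauridsen’s original circuit: the function name, delay and gain values are arbitrary choices. Adding and subtracting a delayed copy of the mono signal yields two complementary comb-filtered channels, so each channel’s spectral peaks fall where the other has notches.

```python
import math

def lauridsen_pseudo_stereo(mono, delay, gain=0.7):
    """Lauridsen-style pseudo-stereo sketch: add and subtract a delayed
    copy of the mono signal to obtain two complementary comb-filtered
    channels (the frequency-distribution class of pseudo-stereo)."""
    delayed = [0.0] * delay + mono[:-delay] if delay > 0 else list(mono)
    left = [m + gain * d for m, d in zip(mono, delayed)]   # comb peaks
    right = [m - gain * d for m, d in zip(mono, delayed)]  # complementary notches
    return left, right

# Usage: one second of a 1 kHz tone at 44.1 kHz; a 10 ms delay
# (441 samples) spaces the comb peaks and notches 100 Hz apart.
sr = 44100
mono = [math.sin(2 * math.pi * 1000 * n / sr) for n in range(sr)]
left, right = lauridsen_pseudo_stereo(mono, delay=441)
```

Per note 5, the delay sets the comb spacing: a delay of T seconds places peaks every 1/T Hz, so shorter delays spread the distributed bands further apart.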
