Real-time Spectral Analysis and Dispersion
[Full article published in eContact! 11.2 — Figures canadiennes II / Canadian Figures II]
Toronto Electroacoustic Symposium, Session 5: Systems and Techniques
Saturday 9 August, 13:00–15:00. Church of St. Andrew-by-the-Lake, Toronto Islands
A new approach to the real-time extraction of spectral information from a given sound, and the dispersion of its spectral components, using the Max/MSP programming environment is described. The system is being developed for use in real-time performance processing. Four aspects of this research are discussed: freeze-frame sonification, bin-extraction methods, spectral noise gating, and partial grouping and dispersion. This work allows the user to analyze, manipulate and resynthesize the initial sound while choosing which partials are reconstructed and controlling where they occur in the final sound field.
Using standard FFT procedures available in Max/MSP, the user is able to complete a real-time analysis of any input sound, with the resulting phase-deviation and amplitude information stored in a buffer. The system allows the user to parse the buffer at a wide variety of speeds and in either direction in order to synthesize sound materials. Additionally, a non-real-time freeze-frame function is available, in which the user “zooms” into a single frame, coming to a complete standstill in the buffer.
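The buffer-parsing and freeze-frame behaviour can be sketched in pseudocode terms. The following is a minimal, hypothetical Python illustration (the actual system is a Max/MSP patch; the function name, frame format and wrapping behaviour here are assumptions, not the author's implementation):

```python
# Hypothetical sketch: stepping through a stored analysis buffer at a
# variable rate and direction. A rate of 0 models the freeze-frame case,
# in which the read position comes to a standstill on a single frame.

def parse_buffer(frames, rate, start=0, steps=10):
    """Yield frames from the buffer while scanning it.

    rate > 0 reads forward, rate < 0 reads backward, and rate == 0
    keeps returning the same frame (the freeze-frame behaviour).
    """
    pos = start
    n = len(frames)
    for _ in range(steps):
        yield frames[int(pos) % n]  # wrap around at the buffer edges
        pos += rate

# Toy "analysis buffer": one (amplitude, phase-deviation) pair per frame.
buffer = [(i, 0.0) for i in range(8)]

forward = list(parse_buffer(buffer, rate=1))           # normal playback
frozen = list(parse_buffer(buffer, rate=0, start=3))   # freeze on frame 3
```

Fractional or negative rates scan the same buffer slower or in reverse, which corresponds to the "wide variety of speeds and in either direction" described above.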
The freeze-frame ability is particularly interesting when coupled with the bin-extraction feature, which begins by sorting the indices resulting from the analysis according to magnitude. Using logical operators, the user can then request, resynthesize and output the data of one or more bins, controlling the eventual reconstruction or even deconstruction of the initial sound. By omitting bins above a certain index (usually above bin 60 in a 512-bin analysis), as well as overly prominent amplitudes (which may be potential errors in the analysis), a rudimentary spectral noise gate can be constructed. This gate is useful for refining the quality of the resynthesized sound.
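The combined sort-and-gate logic can be illustrated with a short sketch. This is a hypothetical Python rendering of the idea, not the author's Max/MSP patch; the function name, the `keep` parameter and the threshold values are assumptions for the example:

```python
# Hypothetical sketch of bin extraction with a rudimentary spectral noise
# gate: drop bins above a cutoff index and bins with suspiciously large
# amplitudes, then keep the strongest of what remains.

def extract_bins(magnitudes, keep=4, max_index=60, max_amp=1.0):
    """Return the indices of the strongest bins after gating."""
    # Apply the noise gate: high-frequency bins and overly prominent
    # amplitudes (possible analysis errors) are omitted.
    gated = [(i, m) for i, m in enumerate(magnitudes)
             if i <= max_index and m <= max_amp]
    # Sort by magnitude, strongest first, and keep the requested number.
    gated.sort(key=lambda pair: pair[1], reverse=True)
    return [i for i, _ in gated[:keep]]

# Toy 512-bin magnitude spectrum with a few nonzero bins.
spectrum = [0.0] * 512
spectrum[5] = 0.9
spectrum[12] = 0.5
spectrum[40] = 2.0    # gated out: amplitude too prominent
spectrum[100] = 0.8   # gated out: above the bin-60 cutoff

strongest = extract_bins(spectrum, keep=2)
```

Here `strongest` contains bins 5 and 12: bin 100 falls above the index cutoff and bin 40 exceeds the amplitude ceiling, so neither survives the gate.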
After bin extraction has occurred, bins are grouped and assigned to channels. The groupings are decided by the user, but it is usually desirable for each channel to contain significant information, typically from the lower-numbered bins. To this end, in an eight-channel environment bins 1, 9, 17, etc. can be allocated to channel 1; bins 2, 10, 18, etc. to channel 2; and so forth. In performance, the dynamic and spatial output of each channel can be controlled automatically by the computer or manually by the user, resulting in the dispersion or focusing of the source sound.
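The round-robin channel allocation described above reduces to a simple modulo mapping. The following is a minimal Python sketch of that mapping (the function name and 1-based channel numbering are assumptions chosen to match the bin numbering in the text):

```python
# Hypothetical sketch of the round-robin bin-to-channel assignment:
# in an eight-channel setup, bins 1, 9, 17, ... go to channel 1;
# bins 2, 10, 18, ... to channel 2; and so forth.

def assign_channels(bin_indices, n_channels=8):
    """Map each (1-based) bin index to one of n_channels channels."""
    channels = {c: [] for c in range(1, n_channels + 1)}
    for b in bin_indices:
        channels[(b - 1) % n_channels + 1].append(b)
    return channels

chans = assign_channels(range(1, 25))  # bins 1..24 across 8 channels
```

Because the mapping cycles through the channels, the low-numbered (and typically most significant) bins are spread evenly, so every channel carries meaningful spectral content.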
Future work: Currently the freeze-frame function does not operate in real time; however, it is expected that using several interlocking buffers will make true real-time input/output possible. Future work on the spectral noise gate will explore possible correlations between the sound source and significant bins.
Martin Ritter writes both electroacoustic and acoustic works and develops software tools in different languages. He has worked with performers Corey Hamm, Paolo Bortolussi, Ralph Markham, Kenneth Broadway, the Phoenix Chamber Choir, Vancouver Chamber Choir, and Standing Wave. He is active in the Sonic Boom festivals and is a founding member of the Composers’ Collective. His composition instructors include Mark Armanini, Stephen Chatman, Keith Hamel and Bob Pritchard, and he has become involved with the UBC Media And Graphics Interdisciplinary Centre (MAGIC) research group. Currently he is working on his Master of Music in Composition at UBC.
Paper originally presented at the Toronto Electroacoustic Symposium 2008, August 2008.