| Participants: | Marlon Schumacher (author), Sean Ferguson (supervisor), Jean Bresson (supervisor) |
|---|---|
| Funding: | NSERC/CCA Spatialization, FQRNT, CIRMMT (travel/exchange grants) |
| License: | LGPL |
| Time Period: | 2009–present (ongoing) |
OMPrisma is a library for spatial sound synthesis in the computer-aided composition environment OpenMusic.
In addition to working with pre-existing sound sources (i.e. sound files), it permits the synthesis of sounds with complex spatial morphologies, controlled by processes developed in OpenMusic in relation to other sound-synthesis parameters and to the symbolic data of a compositional framework.
OMPrisma's system architecture separates authoring of spatial sound scenes from rendering and reproduction (see the ISASA2010 paper). This approach provides an abstraction layer which allows the rendering of alternative realizations of the same spatial sound scene description using different spatialization techniques and loudspeaker arrangements.
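The separation of scene authoring from rendering can be illustrated with a small sketch. The code below is hypothetical and not OMPrisma's actual API (OMPrisma is an OpenMusic library): an abstract scene holds only source positions, and interchangeable renderers turn the same scene into gains for different techniques and loudspeaker layouts.

```python
import math

# Hypothetical sketch (not OMPrisma's actual API): a scene description
# holds only abstract source positions; each renderer turns the same
# scene into technique-specific gains for its own loudspeaker layout.

class Scene:
    """Abstract spatial scene: sources with (x, y) positions."""
    def __init__(self, sources):
        self.sources = sources  # list of (x, y) tuples

class StereoPanRenderer:
    """Renders a scene as equal-power stereo pan gains."""
    def render(self, scene):
        gains = []
        for x, _ in scene.sources:
            p = (max(-1.0, min(1.0, x)) + 1.0) / 2.0  # map x to [0, 1]
            gains.append((math.cos(p * math.pi / 2),   # left
                          math.sin(p * math.pi / 2)))  # right
        return gains

class QuadAmplitudeRenderer:
    """Renders the same scene for four loudspeakers using
    normalized inverse-distance weights."""
    SPEAKERS = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
    def render(self, scene):
        gains = []
        for sx, sy in scene.sources:
            w = [1.0 / (math.hypot(sx - lx, sy - ly) + 1e-6)
                 for lx, ly in self.SPEAKERS]
            total = sum(w)
            gains.append(tuple(g / total for g in w))
        return gains

# The same scene description rendered by two different techniques:
scene = Scene([(0.5, 0.0)])
stereo = StereoPanRenderer().render(scene)
quad = QuadAmplitudeRenderer().render(scene)
```

The point is the abstraction layer: the `Scene` carries no technique-specific data, so swapping renderers realizes the same spatial description for a different setup.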
While OMPrisma 1.0 focused on compositional control of sweet-spot-based spatialization techniques (such as VBAP or Ambisonics), OMPrisma 2.0 introduces new classes implementing spatialization techniques for non-centralized audiences (such as DBAP, BaBo or ViMiC). This allows for reproduction setups with arbitrary loudspeaker placements, e.g. on stage, among the audience, in installation contexts, etc. The screenshot below, for example, shows a patch in which an OMPrisma class for Virtual Microphone Control (ViMiC) is used to simulate the spatialization technique employed in K.H. Stockhausen's “Kontakte” (1959/1960): a rotating table with a mounted directional loudspeaker (the sound source) and four stationary microphones placed around it. A detailed description of this technique can be found in: Braasch, J., Peters, N., and Valente, D. L. (2008). A loudspeaker-based projection technique for spatial music applications using Virtual Microphone Control. Computer Music Journal, 32(3):55–71.
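The rotating-source arrangement can be sketched numerically. The following is an illustrative toy model of the virtual-microphone idea (hypothetical code, not OMPrisma's ViMiC implementation): a source rotates on a turntable of radius `r`, surrounded by four stationary cardioid microphones facing the centre; each microphone's gain combines its directivity with inverse-distance attenuation.

```python
import math

# Toy model (assumed parameters, not OMPrisma's ViMiC): a source on a
# turntable of radius r, with n_mics stationary cardioid microphones on
# a circle of radius mic_dist, all aimed at the centre.

def mic_gains(source_angle, r=1.0, mic_dist=2.0, n_mics=4):
    sx, sy = r * math.cos(source_angle), r * math.sin(source_angle)
    gains = []
    for m in range(n_mics):
        a = 2 * math.pi * m / n_mics
        mx, my = mic_dist * math.cos(a), mic_dist * math.sin(a)
        dx, dy = sx - mx, sy - my
        dist = math.hypot(dx, dy)
        # angle between the mic's axis (toward the centre) and the source
        axis = math.atan2(-my, -mx)
        theta = math.atan2(dy, dx) - axis
        directivity = 0.5 + 0.5 * math.cos(theta)  # cardioid pattern
        gains.append(directivity / dist)
    return gains
```

Sweeping `source_angle` over time reproduces the characteristic effect: as the source rotates, its signal migrates from microphone to microphone, and those microphone signals are then routed to loudspeakers around the audience.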
OMPrisma currently implements the following classes for spatial sound rendering.
All classes render perceptual distance cues: attenuation, air absorption, and the Doppler effect.
| OMPrisma Class | Description | 2D/3D | Sweet spot | Local/global | ICLDs | ICTDs | Room model |
|---|---|---|---|---|---|---|---|
| ambi | higher-order Ambisonics | 3D | Y | global | X | | |
| babo | ball-in-a-box | 3D | N | global | X | X | physical (resonator) |
| dbap | distance-based amplitude panning | 3D | N | global | X | | |
| panning | pan-pot (transfer functions) | 2D | Y | local | X | | |
| rvbap | vector-base amplitude panning | 3D | Y | hybrid | X | | signal (FDN) |
| spat | Ambisonics | 3D | Y | global | X | | geometric (source-image) |
| vbap | vector-base amplitude panning | 3D | Y | local | X | | |
| vimic | virtual microphone control | 3D | N | global | X | X | |

(ICLDs/ICTDs: inter-channel level/time differences.)
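The three distance cues mentioned above can be sketched with simple textbook formulas (illustrative only, not OMPrisma's exact DSP): inverse-distance attenuation, a toy distance-dependent low-pass standing in for air absorption, and the Doppler shift for a moving source.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def attenuation(distance, ref=1.0):
    """Inverse-distance (1/r) gain, clamped at the reference distance."""
    return ref / max(distance, ref)

def air_absorption_cutoff(distance, base_hz=20000.0, coeff=0.15):
    """Toy model: a low-pass cutoff that falls with distance, since air
    absorbs high frequencies more strongly (coeff is an assumed constant)."""
    return base_hz * math.exp(-coeff * distance)

def doppler_frequency(f_source, radial_velocity):
    """Observed frequency for a moving source and stationary listener
    (positive radial_velocity = source approaching)."""
    return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND - radial_velocity)
```

For example, doubling the distance halves the gain, and an approaching source is heard at a higher pitch than a receding one.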
In addition to the authoring and rendering of spatial sound scenes, a third component of the OMPrisma framework is dedicated to reproduction (decoding, diffusion), which often requires tweaking and adaptation for a given venue. For flexible adjustments in real time we have developed a Max/MSP-based standalone application built on the Jamoma framework: the “MultiPlayer”. It allows processing (rotation, transformation) and decoding of B-format/Higher-Order Ambisonics files, provides graphical tools for configuring loudspeaker arrangements (with automatic or manual compensation of time-delay and amplitude differences), and supports binaural reproduction/simulation via HRTF convolution with virtual loudspeaker positions.
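The scene rotation applied to first-order B-format material can be sketched as follows (a minimal example, assuming W/X/Y/Z channel ordering; the rotation-direction sign convention may differ between tools, and this is not the MultiPlayer's implementation): a yaw rotation about the vertical axis mixes the X and Y channels and leaves W and Z untouched.

```python
import math

# Minimal sketch of first-order B-format yaw rotation (assumed channel
# order W, X, Y, Z; sign convention varies between tools). Rotating the
# sound field about the vertical axis only mixes X and Y.

def rotate_bformat_yaw(w, x, y, z, angle):
    """Rotate one first-order B-format sample by `angle` radians about z."""
    c, s = math.cos(angle), math.sin(angle)
    return (w, x * c - y * s, x * s + y * c, z)
```

Because the transform is a plain 2D rotation of (X, Y), it preserves the encoded energy, which is why whole Ambisonics scenes can be rotated cheaply at playback time without re-rendering.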
OMPrisma has been used for the composition and realization of a number of works, most notably: