This is the second iteration of the spatialised wave modulation experiment, this time expanding the variables and applying room simulation. The additions focused primarily on automating the rotational movement of the sound sources on various axes.
The first amendment to the system was an improvement to the sawtooth generation: I realised that rather than implementing a half-baked Fourier series, I could simply use a linear equation to calculate the sample values. This method is somewhat restrictive, but for the purposes of this experiment it produced the required result.
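A minimal sketch of that linear approach: each cycle is a straight line ramping from negative to positive amplitude, driven by the fractional phase position. The function name and parameters are my own for illustration, not the project's code, and the result is a naive, non-band-limited sawtooth (which aliases at higher frequencies, hence "somewhat restrictive"):

```python
def sawtooth(freq, sample_rate, n_samples, amplitude=1.0):
    """Naive linear sawtooth: each period is a straight line from -amplitude
    to +amplitude, computed from the fractional position within the cycle."""
    samples = []
    for n in range(n_samples):
        phase = (n * freq / sample_rate) % 1.0  # position within cycle, [0, 1)
        samples.append(2.0 * amplitude * phase - amplitude)
    return samples
```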
The next step was to attach the sound sources to a parent object that could then move all sources in relation to one another. The script that modulated the rotation of the parent object used range and speed variables, measured in degrees and hertz respectively, to test the effect at different velocities. The axis of rotation could also be changed to test the effect beyond the azimuth plane of the Google Resonance plugin. 
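The rotation modulation described above can be sketched as a single function mapping time to an angle. The sinusoidal mapping and the name `rotation_angle` are assumptions on my part, since the original Unity script is not shown; the point is that `range_deg` sets the sweep width in degrees and `speed_hz` sets how many full oscillations occur per second:

```python
import math

def rotation_angle(t, range_deg, speed_hz):
    """Angle (degrees) of the parent object at time t (seconds):
    oscillates sinusoidally between -range_deg and +range_deg
    at speed_hz cycles per second."""
    return range_deg * math.sin(2.0 * math.pi * speed_hz * t)
```

In Unity this value would typically be applied each frame to the parent transform's rotation about the chosen axis.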
A reverb zone, part of the Resonance plugin, was also added. The dimensions and materials of each surface can be changed, both of which were tested with varying degrees of success.
The modulating effect on the rotation of the source parent produced a largely expected result. The frequencies panning around the stereo field were still distinct as their own sources while modulating each other, as was experienced in the previous experiment. With the doppler effect active on the sound sources, the movement introduced some pitch variation, an interesting effect, though one that did not show much promise of musical or even sound design utility.
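For reference, the pitch variation follows the standard Doppler relation for a moving source and a stationary listener. This helper is purely illustrative of the physics, not how the Unity/Resonance implementation computes it:

```python
def doppler_frequency(f_source, v_source, c=343.0):
    """Observed frequency for a source moving at v_source m/s toward the
    listener (positive) or away (negative); c is the speed of sound."""
    return f_source * c / (c - v_source)
```

A source sweeping through an arc alternately approaches and recedes, so the perceived pitch wobbles above and below the source frequency.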
When the reverb zone was introduced, the modulations between the frequencies were accentuated, as the reflections elongated the amplitude tail and so increased the superposition of the sound waves. I would have expected much more of a phasing effect due to the time delay of the reflected sound waves, so the result was not as pronounced as I had hoped.
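The phasing I was expecting from a single delayed reflection is essentially comb filtering: summing a signal with a delayed, attenuated copy of itself reinforces frequencies whose period divides the delay and cancels those that arrive half a cycle out of phase. A sketch of that superposition (illustrative only, not how Resonance computes its reflections):

```python
def comb_mix(dry, delay_samples, gain=0.5):
    """Sum a signal with a delayed, attenuated copy of itself,
    producing the peaks and notches of a single early reflection."""
    out = []
    for n, x in enumerate(dry):
        delayed = dry[n - delay_samples] if n >= delay_samples else 0.0
        out.append(x + gain * delayed)
    return out
```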
The final configuration I tried was to make the width of the reverb zone smaller than the arc of the rotation modulation; in other words, the sources moved in and out of the reverb. This produced resonant peaks at regular intervals as each sound source passed through the reverb zone, which combined with the remaining 'dry' audio to produce a sort of 'gluing' effect, beginning to meld the sources together.
This final effect presented a promising further avenue of investigation, as the resonance was somewhat reminiscent of an acoustic waveguide: a resonant object that guides sound waves in a particular way to produce a specific sound, a flute for example. The best tool for digitally simulating waveguides seems to be The Synthesis Toolkit (STK), a C++ library with a whole collection of functions and classes for digital waveguide simulation. Additionally, segregating these tones and manipulating their amplitudes could result in behaviour more akin to harmonics than distinct tones, making the potential for musical utility far greater.
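As a taste of what digital waveguide simulation involves, here is a minimal Karplus-Strong plucked-string loop, one of the simplest waveguide models: a delay line seeded with noise is repeatedly averaged with its neighbour, damping the high frequencies until the loop settles into a pitched tone. STK itself is C++, so this Python version is only a sketch of the underlying idea, not STK's API:

```python
import random

def karplus_strong(freq, sample_rate, n_samples):
    """Minimal plucked-string waveguide: a noise-filled delay line whose
    length sets the pitch (sample_rate / period), with a two-point average
    acting as the loop's low-pass damping filter."""
    period = int(sample_rate / freq)       # delay-line length sets the pitch
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for _ in range(n_samples):
        first = buf.pop(0)
        out.append(first)
        buf.append(0.5 * (first + buf[0]))  # average = gentle low-pass decay
    return out
```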
