Motion to Music Interrelationships (continued)
Additionally, the dancer's location was used to control pitch, timbre, and sound diffusion. Whenever the dancer's movement triggered a zone, the index number of the triggered zone became the MIDI note number of the note event corresponding to the change in zone values. This meant, for example, that zone #55 for Camera A (located in quadrant 3) would correspond to MIDI note 55 (unless it was transposed for esthetic purposes, as will be explained later). Assuming that the MIDI note numbers are then filtered through a standard twelve-tone equal-tempered tuning, it is already possible to predict the prevailing harmonic quality of the music simply by looking at the distribution of zone numbers on the grid. Because each quadrant is built from an integer series in increments of four, the numbers contained within a quadrant map to members of the same augmented chord, albeit in various octaves. Furthermore, since the dancer's motion most frequently occurs as a trajectory through a contiguous area (the dancer does not disappear from one quadrant and reappear in another), it follows that distinct collections of augmented triads (in various registral distributions) will occur and be audible. In a way, this system resembles a giant 3-D pitch lattice that can be played by moving within it. Although the preponderance of augmented chords was actually an accidental byproduct of finding an efficient solution to the problem of locating the dancer's position in the motion capture space using a modular operation, the resulting "neo-impressionist" harmonic quality resonated in a satisfying way, esthetically speaking, with the character of the exhibition and the ambient sound of the museum itself. There was also the added bonus of facilitating palpable connections between pitch space and physical space on a perceptual level.
In future projects it would be easy to circumvent this particular mapping by redistributing the zones within the grid, or creating a separate algorithm for generating pitch.
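The zone-to-pitch mapping described above can be sketched in a few lines. The function and variable names here are illustrative assumptions, not the installation's actual code; the sketch assumes the common C4 = MIDI 60 octave convention and the flat/sharp spellings used in the lattice below.

```python
# Hypothetical sketch of the zone-to-MIDI-note mapping: the triggered
# zone's index number is used directly as the MIDI note number.
# Zones within a quadrant step by 4 (a major third), so their pitch
# classes (mod 12) all belong to one augmented triad.

NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def zone_to_note(zone_index, transpose=0):
    """Map a triggered zone index to a spelled note name."""
    midi_note = zone_index + transpose   # e.g. zone #55 -> MIDI note 55
    octave = midi_note // 12 - 1         # assumes MIDI note 60 = C4
    return f"{NOTE_NAMES[midi_note % 12]}{octave}"

# Four zones in one quadrant, stepping by 4: their pitch classes cycle
# through the members of a single augmented chord (here G-B-Eb).
quadrant_zones = [55, 59, 63, 67]
print([zone_to_note(z) for z in quadrant_zones])  # -> ['G3', 'B3', 'Eb4', 'G4']
```

The `transpose` parameter stands in for the esthetic transposition mentioned above; redistributing the zone numbers on the grid, as proposed for future projects, would change the harmonic result without touching this function.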
Sample plane from a pitch lattice derived from the distribution of zones within four quadrants.
Eb4 | G4  | B4  | Eb5 || D4  | F#4 | A#4 | D5
B2  | Eb3 | G3  | B3  || A#2 | D3  | F#3 | A#3
G1  | B1  | Eb2 | G2  || F#1 | A#1 | D2  | F#2
Eb0 | G0  | B0  | Eb1 || D0  | F#0 | A#0 | D1
----+-----+-----+-----++-----+-----+-----+----
C4  | E4  | G#4 | C5  || Db4 | F4  | A4  | Db5
G#2 | C3  | E3  | G#3 || A2  | Db3 | F3  | A3
E1  | G#1 | C2  | E2  || F1  | A1  | Db2 | F2
C0  | E0  | G#0 | C1  || Db0 | F0  | A0  | Db1
The dancer's location was also mapped to timbre in a 1:1 correspondence. Each of the eight quadrants (four per camera, A and B) was associated with a particular MIDI channel, and each MIDI channel was assigned a particular sound on the external synthesizer. Using the synthesizer's four separate audio outputs, each sound was routed in isolation to one of four speakers arranged quadraphonically around the motion capture space. Each sound was also sent to a subwoofer for added bass resonance. The quadrants were mapped to the four speakers so that the dancer's location would be paralleled by the sound diffusion (via the activation of a particular timbre in a fixed location). To the viewer, the sound seemed to follow the dancer through the space, and the timbre changed depending on the dancer's location.
The table below shows the mappings of MIDI sounds to speaker locations for the acoustic sound set.
Camera A
Quadrant | Sound           | MIDI channel | Speaker
4        | Electric Piano  | 4            | 4
3        | Harpsichord     | 2            | 2
1        | Celeste         | 3            | 3
2        | Marimba         | 1            | 1

Camera B
Quadrant | Sound           | MIDI channel | Speaker
4        | Drums           | 7            | 3
3        | Kalimba         | 8            | 4
1        | Vibraphone      | 5            | 1
2        | Acoustic Guitar | 6            | 2
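As a minimal sketch, the camera/quadrant routing tabled above could be represented as a simple lookup. The dictionary structure and function name are assumptions made for illustration; in the installation itself, the timbres and output routing lived on the external synthesizer.

```python
# Illustrative lookup: (camera, quadrant) -> (sound, MIDI channel, speaker),
# transcribed from the acoustic sound set table.
ROUTING = {
    ("A", 1): ("Celeste", 3, 3),
    ("A", 2): ("Marimba", 1, 1),
    ("A", 3): ("Harpsichord", 2, 2),
    ("A", 4): ("Electric Piano", 4, 4),
    ("B", 1): ("Vibraphone", 5, 1),
    ("B", 2): ("Acoustic Guitar", 6, 2),
    ("B", 3): ("Kalimba", 8, 4),
    ("B", 4): ("Drums", 7, 3),
}

def route(camera, quadrant):
    """Return the timbre, MIDI channel, and speaker for a dancer's location."""
    sound, channel, speaker = ROUTING[(camera, quadrant)]
    return {"sound": sound, "midi_channel": channel, "speaker": speaker}

# A dancer entering quadrant 3 of Camera A activates the harpsichord
# on MIDI channel 2, diffused from speaker 2.
print(route("A", 3))
```

Keeping the mapping in one table like this would also make the future reconfigurations mentioned earlier (different sound sets or speaker assignments) a matter of editing data rather than logic.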