Synopsis of the Music
The music for e-Motion was not "composed"
in the traditional sense, but was instead a design for an interactive
performance space that facilitated the sonification of the dancer's motion.
Unlike previous work I have undertaken utilizing similar technology, the
music for e-Motion used a three-dimensional motion capture space to
control the music.
To make this possible, it was necessary to use a minimum of two video
cameras whose combined perspectives formed a 90° angle when placed on
adjacent sides of the marley floor (where the dancer was performing). By
overlapping the views of both cameras in this way, the dancer's movement
could be tracked in any direction: front-to-back, side-to-side, up-and-down,
and so on, rather than only in two dimensions.
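As a rough illustration of this idea, the Python sketch below merges the 2-D positions reported by two cameras set at 90° into a single 3-D point. The coordinate assignments, the Position3D type, and the averaging of the shared vertical axis are assumptions made for illustration; the actual system was built in Max across two computers.

```python
from dataclasses import dataclass

@dataclass
class Position3D:
    x: float  # side-to-side
    y: float  # front-to-back
    z: float  # up-and-down

def combine_views(cam_a, cam_b):
    """Merge 2-D positions from two cameras placed at 90° to each other.

    cam_a: (x, z) as seen by the camera facing the front of the floor
    cam_b: (y, z) as seen by the camera facing the side of the floor
    """
    ax, az = cam_a
    by, bz = cam_b
    # Both cameras see the vertical axis, so average the two readings.
    return Position3D(x=ax, y=by, z=(az + bz) / 2.0)
```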
The output from each camera was analyzed in real-time on separate
computers (logically dubbed “Computer A” and “Computer B” for their
respective associations with Camera A and Camera B).
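The analysis method itself is not described here; one common approach in camera-based interactive systems is simple frame differencing, sketched below purely as an assumption about the kind of measurement each computer might have produced.

```python
import numpy as np

def motion_amount(prev_frame, cur_frame, threshold=16):
    """Crude motion estimate: the fraction of pixels that changed by more
    than `threshold` between two consecutive grayscale frames (0.0 to 1.0)."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float((diff > threshold).mean())
```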
The data from the analysis of each camera's output was translated into
MIDI (Musical Instrument Digital Interface) data using Max (a graphical
object-based programming language for interactive computer music) and then
realized as musical sound on an external synthesizer.
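As a minimal sketch of this translation stage (not the actual Max patches), the following maps an arbitrary analysis value into the 0-127 range of a MIDI controller and packs it into the raw bytes of a Control Change message; the function names are hypothetical.

```python
def scale_to_midi(value, lo, hi):
    """Map an analysis value (e.g. an amount or position of motion)
    onto the 0-127 range used by MIDI controller values."""
    value = max(lo, min(hi, value))
    return int(round((value - lo) / (hi - lo) * 127))

def control_change(channel, controller, value):
    """Assemble the three raw bytes of a MIDI Control Change message:
    status byte (0xB0 + channel), controller number, controller value."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

# Example: send 42% motion as controller 7 (volume) on MIDI channel 1.
msg = control_change(0, 7, scale_to_midi(0.42, 0.0, 1.0))
```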
The sound was diffused among four speakers arranged quadraphonically
around the motion capture space, following the dancer's location within it.
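One plausible way to derive the four speaker levels from the dancer's position is an equal-power pan across each axis, sketched below; the panning law and the normalized coordinates are assumptions, not a description of the patch that was used.

```python
import math

def quad_gains(x, y):
    """Equal-power gains for four speakers at the corners of the space.

    x: normalized side-to-side position (0 = left, 1 = right)
    y: normalized front-to-back position (0 = front, 1 = back)
    Returns (front-left, front-right, rear-left, rear-right) gains.
    """
    left, right = math.cos(x * math.pi / 2), math.sin(x * math.pi / 2)
    front, back = math.cos(y * math.pi / 2), math.sin(y * math.pi / 2)
    return (front * left, front * right, back * left, back * right)
```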
The
picture below shows the top-level Max patch for Computer A.
[Image: top-level Max patch for Computer A (locked)]