[Image: A still from the video in Universal Translator]
The interface for this work is a microphone with a miniature video camera embedded in its head, so that the camera looks directly at the mouth from very close range. The sound of the voice and the video of the moving lips are captured by computers; these sounds and images provide most of the content of the work and drive most of its interactivity. A computer monitor faces the interactor and displays the processed mouth images.
Sound is recorded into computer memory and is immediately available for analysis, processing and output. The sound is analysed for phonemic content and vocal intensity, and this information is used to control various aspects of the sound processing and output. Incoming sound is fed into a live granular synthesis system so that it can be stretched (with or without pitch shift), shortened, shattered, and diffused in various ways. The incoming audio is also chopped into syllable-like fragments, which are stored and replayed in response to features in the incoming audio. As a result, a dense, hovering soundworld is constructed using only the sounds provided by the interactors over the recent past.
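The paragraph above names two concrete techniques: granular time-stretching and syllable-like chopping of the incoming audio. The sketch below shows one minimal way to realize both; it is not Rokeby's implementation, and the function names, grain length, hop size, and energy threshold are all assumptions chosen for illustration.

```python
import numpy as np

def granular_stretch(audio, stretch=2.0, grain_len=1024, hop=256):
    """Time-stretch `audio` by overlap-adding short windowed grains.

    Grains are read from the input more slowly (or quickly) than they
    are written to the output, changing duration without pitch shift.
    """
    window = np.hanning(grain_len)
    out_len = int(len(audio) * stretch)
    out = np.zeros(out_len + grain_len)
    write_pos = 0
    while write_pos < out_len:
        # The read position advances 1/stretch as fast as the write position.
        read_pos = int(write_pos / stretch)
        if read_pos + grain_len > len(audio):
            break
        out[write_pos:write_pos + grain_len] += audio[read_pos:read_pos + grain_len] * window
        write_pos += hop
    return out[:out_len]

def split_syllables(audio, frame=512, threshold=0.02):
    """Chop audio into syllable-like fragments at dips in short-time energy."""
    n_frames = len(audio) // frame
    rms = np.array([np.sqrt(np.mean(audio[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n_frames)])
    voiced = rms > threshold
    fragments, start = [], None
    for i, v in enumerate(voiced):
        if v and start is None:
            start = i * frame                        # energy rises: fragment begins
        elif not v and start is not None:
            fragments.append(audio[start:i * frame])  # energy dips: fragment ends
            start = None
    if start is not None:
        fragments.append(audio[start:])
    return fragments
```

In a live system like this one, the grains would presumably be read from a circular buffer of recently captured sound rather than a finished array, but the overlap-add structure is the same.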
The video aspect of this work is intended to be secondary to the audio. The screen is initially black. When the microphone hears a sound, the video of the mouth fades into visibility, fading out again when the sound ends. When no sound is heard for some time, short video clips appear that comment on the relationship between language and the body.
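The fade behaviour described above amounts to a simple sound-gated state machine. The following sketch shows one way it might work, assuming a per-frame loudness (RMS) reading from the microphone; the class, thresholds, and frame counts are hypothetical, not details taken from the installation.

```python
class MouthFader:
    """Drives the visibility of the live mouth image from microphone level."""

    def __init__(self, fade_step=0.05, threshold=0.02, idle_limit=300):
        self.alpha = 0.0               # 0.0 = black screen, 1.0 = fully visible
        self.fade_step = fade_step     # change in visibility per video frame
        self.threshold = threshold     # RMS level treated as "sound present"
        self.idle_limit = idle_limit   # silent frames before an idle clip plays
        self.silent_frames = 0

    def update(self, rms_level):
        """Call once per video frame with the current microphone level."""
        if rms_level > self.threshold:
            # Sound present: fade the live mouth image up.
            self.alpha = min(1.0, self.alpha + self.fade_step)
            self.silent_frames = 0
        else:
            # Silence: fade back toward black and count idle frames.
            self.alpha = max(0.0, self.alpha - self.fade_step)
            self.silent_frames += 1
        return self.alpha

    def should_play_idle_clip(self):
        """After prolonged silence, cue one of the language/body video clips."""
        return self.silent_frames >= self.idle_limit
```

The returned alpha would then scale the brightness of the processed mouth image before it is drawn to the monitor.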