Robot 1.2 – New Direction
Last year, the development of Robot 1.1 was an experiment in electrical engineering and psychology (in terms of rhythmic human-computer interaction). It is a hardware interface that interprets ambient sound as structured pulses (using low-pass filter, peak detector, and tempo filter circuits), which are fed into a microcontroller that controls a solenoid-powered drum stick (see previous post for video).
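To make that front end concrete, here is a rough software analogue of the analog chain: a one-pole low-pass filter followed by a threshold peak detector, sketched in C. The sample rate, cutoff, and threshold values are illustrative assumptions, not the actual component values of the circuit.

    #include <math.h>
    #include <stdbool.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define SAMPLE_RATE 8000.0   /* Hz (assumed) */
    #define CUTOFF_HZ   150.0    /* pass the low, kick-drum band (assumed) */
    #define THRESHOLD   0.4      /* amplitude threshold (assumed) */

    static double lp_state = 0.0;   /* low-pass filter memory */
    static bool   above    = false; /* above threshold on the last sample? */

    /* Returns true exactly once per pulse: when the rectified,
       filtered signal first crosses the amplitude threshold. */
    bool detect_pulse(double sample)
    {
        /* one-pole low-pass: y += a * (|x| - y) */
        double a = 1.0 - exp(-2.0 * M_PI * CUTOFF_HZ / SAMPLE_RATE);
        lp_state += a * (fabs(sample) - lp_state);

        bool pulse = (lp_state > THRESHOLD) && !above;
        above = (lp_state > THRESHOLD);
        return pulse;
    }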
I have been pondering both applications and new directions for this technology. One of the main questions I kept running into while working on this project was response time: for a rhythmic system to be interactive, it must operate in real time. The hardware aspects of Robot 1.1 were very effective at generating a real-time response, although the functionality was limited:
1. Wait for a sound pulse which is within a frequency band and above an amplitude threshold. Both the frequency band and the threshold are controlled by variable resistors.
2. IF [a second impulse is detected] AND [it occurs at least 0.1 second but less than 1 second later] THEN fire solenoid
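That second rule is easy to sketch in C. The following assumes a millisecond tick counter and a fire_solenoid() routine; both are placeholder names for illustration, not the actual firmware interface.

    #include <stdint.h>

    void fire_solenoid(void);   /* placeholder for the solenoid driver */

    #define MIN_GAP_MS  100     /* ignore impulses closer than 0.1 s */
    #define MAX_GAP_MS  1000    /* ignore gaps of 1 s or more */

    static uint32_t last_pulse_ms = 0;

    /* Call this on every detected impulse, with the current tick count. */
    void on_pulse(uint32_t now_ms)
    {
        uint32_t gap = now_ms - last_pulse_ms;
        if (gap >= MIN_GAP_MS && gap < MAX_GAP_MS)
            fire_solenoid();    /* second impulse inside the window */
        last_pulse_ms = now_ms;
    }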
This initial hardware circuit functioned well on rhythms with a strong beat (such as a kick drum). It had the effect of ignoring sequential pulses (usually created by off-beats or accents, which we want to ignore), scaling down its response. For example, imagine the effect created by clapping quickly beside a clap-on lamp: the switch responds only at regular clap intervals.
The purpose of the microcontroller was to share the fire-solenoid signal, using it as input; the rest of the response was controlled by software. Our software was only for test purposes: it performed basic averaging to calculate tempo. It turned out that the device functioned better without the software running, unless the player readjusted their playing to the system's tempo calculation (HALLE dealt with this issue with a call/response structure). Current software methods for performing beat induction were researched in a past paper of mine. It is now my intention to explore the potential for software advancement of this device.
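For reference, the basic averaging our test software performed amounts to something like the sketch below: keep the last few inter-onset intervals and average them into a tempo estimate. The window size of four intervals is an assumption for illustration.

    #include <stdint.h>

    #define WINDOW 4    /* number of intervals to average (assumed) */

    static uint32_t intervals[WINDOW];
    static int count = 0, head = 0;

    /* Record the gap (ms) between consecutive pulses and return the
       current tempo estimate in beats per minute. */
    double update_tempo(uint32_t gap_ms)
    {
        intervals[head] = gap_ms;
        head = (head + 1) % WINDOW;
        if (count < WINDOW) count++;

        uint32_t sum = 0;
        for (int i = 0; i < count; i++)
            sum += intervals[i];

        return 60000.0 * count / sum;   /* mean ms per beat -> BPM */
    }

This also suggests why the device felt worse with the software running: a moving average can only follow a tempo change after the fact, so the player ends up chasing the machine's estimate.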
Aside from research done on other systems such as HALLE, I have been directed to an interesting project going on at UWO called AMEE (Algorithmic Music Evolution Engine). This is a dynamic music composition system that uses a pipelined architecture in the generation process, allowing a structured yet flexible approach to composition. Immediate applications of this technology are within the video game domain, allowing games to alter music “on the fly”. It relies heavily on pattern libraries to store existing musical information. I’m planning to meet with the project leaders soon to find out how rhythmic structure is modeled in this system. I’m also curious about the possible advantages of data-mining approaches vs. classical music theory.
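Since I haven’t met with the AMEE team yet, the following is purely speculative: one way a pipelined, pattern-library-based generator could be organized is as a sequence of stages that each transform a shared score buffer. Every name here is invented for illustration, not taken from AMEE.

    #include <stddef.h>

    typedef struct {
        int    notes[64];   /* toy score representation */
        size_t len;
    } Score;

    typedef void (*Stage)(Score *);

    /* Example stage: splice a stored rhythmic pattern into the score. */
    static const int kick_pattern[4] = {1, 0, 1, 0};

    static void apply_pattern(Score *s)
    {
        for (size_t i = 0; i < s->len && i < 4; i++)
            s->notes[i] = kick_pattern[i];
    }

    /* Run the score through each generation stage in order. */
    void run_pipeline(Score *s, Stage stages[], size_t n)
    {
        for (size_t i = 0; i < n; i++)
            stages[i](s);
    }

The appeal of a structure like this, if AMEE works anything like it, is that one stage can be swapped out (say, a data-mined pattern source for a music-theory rule) without touching the rest of the pipeline.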
Updates to come…