I was thinking about the connections between audio DSP / synthesis techniques and motion capture. Because you can think of both video and audio in terms of signals, they have a lot of similarities and can use the same techniques. You often have to transform signals into feature spaces (i.e., extract features) and then work from there.
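To make that concrete, here's a minimal sketch (the signals and marker names are made up for illustration) of pulling the same kind of feature, short-time energy, out of both an audio buffer and a motion-capture trajectory with plain NumPy:

```python
import numpy as np

def short_time_energy(signal, frame_len, hop):
    """RMS energy per frame -- one of the simplest features for any 1-D signal."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

# Audio: one second of a decaying 440 Hz tone at 44.1 kHz (synthetic stand-in).
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)

# Motion capture: speed of a hypothetical wrist marker sampled at 120 Hz.
mocap_rate = 120
positions = np.cumsum(np.random.randn(mocap_rate * 5, 3) * 0.01, axis=0)  # fake x,y,z track
speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * mocap_rate

# Same routine, same feature space, two very different sources.
audio_energy = short_time_energy(audio, frame_len=1024, hop=512)
motion_energy = short_time_energy(speed, frame_len=12, hop=6)
```

Once both streams live in the same feature space, the downstream analysis (thresholding, segmentation, classification) doesn't much care which medium they came from.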
One universal problem is defining a perceptual quality (whether it is an action, how an action is done, or a timbral color or pitch) within a computational space. Sometimes there seems to be a cruel quality to both practices: after all, my human brain can track the motion. I understand what the timbre is and when it occurs. But my software doesn't have access to the tools that my brain does (yet). Nor has it been exposed to the years and years of training my brain has had to distinguish these qualities. This is very obvious to anyone in my field, but still, when I step back, it seems a bit poignant.
Of course, my brain can't generate a real-time control signal from movement to send to my audio synthesis routines, so there's that. :) Although I can use the information I have, in the form that I have it, to make noise via my physical body.
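That part, movement in, control signal out, is exactly what software is good at. A toy sketch (all values and ranges are hypothetical, not from any real system) mapping marker speed onto a pitch control signal:

```python
import numpy as np

def motion_to_control(speed, lo=220.0, hi=880.0):
    """Map a movement-speed signal onto a pitch range in Hz.

    Normalizes speed to 0..1, then interpolates between lo and hi.
    A real system would stream this frame-by-frame to the synth.
    """
    s = np.clip((speed - speed.min()) / (np.ptp(speed) + 1e-9), 0.0, 1.0)
    return lo + s * (hi - lo)

# Hypothetical marker speed at 120 Hz -> control signal for an oscillator.
speed = np.abs(np.sin(np.linspace(0, 4 * np.pi, 480)))  # stand-in for mocap data
pitch_hz = motion_to_control(speed)
```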