Need artistic advice on sound-reactive video project
-
Hi, everyone
I'm experimenting with live concert footage, layering various effects on it and trying to make it sound-reactive. Basically, I want an interactive element where people can sing back to the performer and trigger enhanced visuals that act as "rewards" for their participation.

I'm having trouble making the visuals compelling, though, and I have no idea which combination of effects would give it more impact. Right now, a Sound Level Watcher drives the mix amount of a Video Mixer, bringing more or less of an outline layer into the footage depending on the sound level. It just looks boring, and I don't know what else to do with it. I also thought about overlaying some sort of graphic equalizer that lets the audience know their voices are what's influencing the visuals, but then there's the problem of it being triggered by the built-in audio as well as the live participants. Mainly, though, I'm just stuck for ideas on how to make this visually striking.

I'm sorry if that's vague, but if anyone has suggestions for how they would approach it, or what might look "cool," I'd really appreciate it. I'm attaching a screen recording below. Right now it's reacting to the sound of my clapping, because my sound recorder won't play the sound out loud while it's recording. I'm also including the original file, "vocal warmup," so you can see what it looks like.
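If it's easier to read as code, this is roughly what the patch does at the moment, sketched in Processing (the mic channel, the scaling, and the stand-in graphics are just placeholders; the real thing is all Isadora actors):

import processing.sound.*;

// Rough equivalent of the current patch:
// mic level -> Video Mixer mix amount -> outline layer fades in and out.
AudioIn mic;
Amplitude follower;

void setup() {
  size(640, 360);
  mic = new AudioIn(this, 0);      // input channel 0, assumed to be the mic
  mic.start();
  follower = new Amplitude(this);  // envelope follower, like Sound Level Watcher
  follower.input(mic);
}

void draw() {
  float level = constrain(follower.analyze() * 4, 0, 1);  // scaling is a guess
  background(20);
  // stand-in for the base footage
  noStroke();
  fill(120, 80, 200);
  ellipse(width/2, height/2, 200, 200);
  // stand-in for the outline layer, faded in by the mic level
  stroke(255, level * 255);
  strokeWeight(3);
  noFill();
  ellipse(width/2, height/2, 200 + level * 60, 200 + level * 60);
}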
Thank you!!!
Attachments: dc53e4-concertaudio.mp4, f61ec4-vocalwarmup.mp4
-
Sorry, guys, turns out my file was not playable, so I'm now working on fixing that... brb
-
With audience-responsive stuff, the audience needs instant feedback that their input has a direct influence on the environment. I think you need to separate the audio feeds, maybe have two systems running: one responding to the music, one responding to the mic.
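To make that concrete, here is a rough sketch of the two-feed idea in Processing; the channel assignments, the scaling, and the placeholder drawing are assumptions, not a drop-in for your patch:

import processing.sound.*;

// Music analysis and mic analysis kept completely separate.
// Assumes the house/music feed arrives on input channel 0 and the
// audience mic on channel 1; adjust for your hardware.
AudioIn musicIn, micIn;
Amplitude musicLevel, micLevel;

void setup() {
  size(640, 360);
  musicIn = new AudioIn(this, 0);
  micIn   = new AudioIn(this, 1);
  musicIn.start();
  micIn.start();
  musicLevel = new Amplitude(this);
  micLevel   = new Amplitude(this);
  musicLevel.input(musicIn);
  micLevel.input(micIn);
}

void draw() {
  float music = musicLevel.analyze();
  float voice = micLevel.analyze();
  // the music drives the ambient look, the mic drives the "reward" layer,
  // so the audience only ever sees their own effect on the reward
  background(40 + music * 150);
  stroke(255, 0, 120, voice * 255);
  strokeWeight(2 + voice * 10);
  noFill();
  ellipse(width/2, height/2, 100 + voice * 300, 100 + voice * 300);
}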
You're using a lot of CPU processing in that patch, so your framerate is low. Try using the Explode actor in different ways; I often use it as a base for audio-responsive visuals because it gives immediate feedback. The Effects Mixer is also a good one to experiment with.

The idea of a graphic that gives the audience a direct indication of their input is useful, but it could be something very simple in addition to the video effects.
-
Hello,

I'm working at the moment on that kind of project, not with sound but with presence, using a Kinect. I think you should look toward procedural drawing, which can react elegantly and quickly to incoming events. You have two choices there:

– Use the new GLSL Shader actor. Searching a bit on the internet, you can find very interesting starting points. Here is the best one from my point of view (it needs some tricks, such as removing the u_ prefix from some variables, but it's very mind-opening): https://thebookofshaders.com. Mark's tutorial is a necessary read beforehand: http://troikatronix.com/support/kb/glsl-shader-actor-tutorial/

– Use the procedural computing possibilities of Processing, sending events from Isadora with OSC and sending images from Processing with Syphon (there's a minimal sketch of this at the end of the post).

In my project, I use Processing to connect to the Kinect, and a shader inside Processing to process the depth and produce the final image. Isadora sends the stock images via Syphon, and I mix them with the real-time image produced by Processing.

Hope that helps,
Jacques
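As a starting point, the Processing side of that route could look something like this; the port, the OSC address, and the drawing are only placeholders to show the idea, and the port must match whatever you set up on the Isadora side (Syphon needs macOS and the P2D/P3D renderer):

import oscP5.*;
import netP5.*;
import codeanticode.syphon.*;

// Isadora sends a value over OSC, Processing draws procedurally from it,
// and the frame is published over Syphon so Isadora can mix it back in.
OscP5 osc;
SyphonServer syphon;
float level = 0;  // latest value received from Isadora, expected 0..1

void setup() {
  size(1280, 720, P3D);                 // Syphon needs P2D or P3D
  osc = new OscP5(this, 1234);          // listen for Isadora's OSC on port 1234
  syphon = new SyphonServer(this, "Processing Out");
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/isadora/level")) {
    level = m.get(0).floatValue();
  }
}

void draw() {
  background(0);
  // simple procedural drawing that reacts to the incoming level
  float n = 20 + level * 200;
  stroke(255);
  for (int i = 0; i < n; i++) {
    float a = TWO_PI * i / n + radians(frameCount);
    line(width/2, height/2,
         width/2 + cos(a) * level * width/2,
         height/2 + sin(a) * level * height/2);
  }
  syphon.sendScreen();                  // publish the frame to Isadora
}

The plumbing stays the same whatever you decide to draw.
-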
I might overlay something that takes time to develop... so that the user is inclined to continue with the interaction to see what unfolds (perhaps another visual gift is given once a level is reached).

Additionally, something that shows the levels produced from the input could be made to give the user a sense of influence.
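A rough Processing sketch of both ideas together: a plain level bar for instant feedback, plus a slower "charge" that unlocks an extra visual once it fills. The thresholds, rates, and graphics are arbitrary and would need tuning to the room:

import processing.sound.*;

// Instant feedback (level bar) plus delayed reward (charge bar that fills
// with sustained input and unlocks an extra visual when full).
AudioIn mic;
Amplitude follower;
float charge = 0;          // 0..1, fills while the audience keeps singing
boolean unlocked = false;

void setup() {
  size(640, 360);
  mic = new AudioIn(this, 0);
  mic.start();
  follower = new Amplitude(this);
  follower.input(mic);
}

void draw() {
  background(10);
  float level = constrain(follower.analyze() * 4, 0, 1);

  // charge grows while the level stays above a threshold, decays otherwise
  if (level > 0.2) charge += 0.004; else charge -= 0.002;
  charge = constrain(charge, 0, 1);
  if (charge >= 1) unlocked = true;

  // instant feedback: a plain level bar
  noStroke();
  fill(0, 200, 255);
  rect(20, height - 40, (width - 40) * level, 20);

  // slower feedback: the charge bar that works toward the reward
  fill(255, 180, 0);
  rect(20, height - 70, (width - 40) * charge, 10);

  // the "visual gift" once the charge is full
  if (unlocked) {
    stroke(255, 0, 150);
    noFill();
    for (int i = 1; i <= 8; i++) {
      ellipse(width/2, height/2, i * 40 + level * 100, i * 40 + level * 100);
    }
  }
}
-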
Guys, this is incredible information that would've taken me long hours to figure out on my own. Thank you so much! Can't wait to put it to use; I will post an update after I play around with it more :D Sorry for the late response, btw. Been a crazy couple of days.