[ANSWERED] Audio interactivity: Sound frequency watcher help
-
Hi all,
I'm working on a dance show and was hoping to have a section where the video reacts (basically flashes) to the clash sound in this file, which repeats a few times. I've been playing around with the Sound Frequency Watcher actor all day and managed to get it to trigger for the first few clashes, but as the soundtrack continues with more background sound it struggles to pick out the clash.
There must be a way to do this; otherwise it's going to have to be done manually, and I really don't want to do that...
I'm running Isadora 4 and the sound comes in through a sound card connected to my Mac Mini.
Any advice appreciated
-
-
If the sound file is being played from a different computer, I'd suggest just sending OSC cues from that computer to trigger the reactions in Isadora on the second computer at the specific times the clash appears in the sound file.
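For illustration only, here's a rough sketch of what sending such a cue could look like from the playback machine. It assumes the python-osc package is installed and that Isadora is listening on its default OSC input port (1234); the IP address and channel are placeholders for your own setup:

```python
# Rough sketch: send an OSC trigger from the playback computer to Isadora.
# Assumes the python-osc package and Isadora's default OSC input port (1234);
# the IP address and OSC channel below are placeholders for your own setup.
from pythonosc.udp_client import SimpleUDPClient

ISADORA_IP = "192.168.1.20"   # placeholder: address of the Isadora machine
ISADORA_PORT = 1234           # Isadora's default OSC input port

client = SimpleUDPClient(ISADORA_IP, ISADORA_PORT)

# An OSC Listener actor set to channel 1 will pick this up as a trigger.
client.send_message("/isadora/1", 1)
```

(QLab can also send OSC directly from a network cue, which might be simpler than running a script at all.)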
If the sound file is being played from the same computer:
- Convert it to an mp3 file so it can be imported as a video file.
- Put it in a Movie Player actor
- Right-Click the Movie Player actor and switch it to use timecode.
- Use a series of Timecode Comparator actors with their compare mode set to "ge" (greater than or equal to) to trigger the reactions at the correct timestamps.
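For illustration only, the comparator logic amounts to something like the sketch below (pure Python, with made-up clash times); inside Isadora the Timecode Comparator actors do this for you:

```python
# Sketch of the "ge" (greater than or equal) comparison idea, outside Isadora.
# The clash timestamps are made up; replace them with the real ones.
import time

CLASH_TIMES = [4.0, 12.5, 27.3, 41.0]   # seconds from the start of the track

def flash():
    print("flash!")   # stand-in for triggering the video effect

start = time.monotonic()
pending = sorted(CLASH_TIMES)

while pending:
    elapsed = time.monotonic() - start
    if elapsed >= pending[0]:   # same test as a comparator in "ge" mode
        flash()
        pending.pop(0)
    time.sleep(0.01)
```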
-
@woland thank you! This is a new level of programming complexity for me, so I didn't think of these options. The sound will be played on another computer via QLab - however, I really don't understand that software and won't be able to access it until very close to opening night.
Another idea I had would be to try to cue the sound file at the same time as the sound computer, playing it in my Isadora in the background and using your second guide - hoping for not too much user error...
-
This would be an example where a 'trigger by marker' actor would be very useful. The OP could take their sound file, add markers in their sound-editing program (which could then be saved in the file header or exported as XML), and have the event triggered easily at each marker in Isadora :-)
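In the meantime the markers can still be put to work by hand: export them from the sound editor and turn them into a list of trigger times. A small sketch, assuming an Audacity-style label export (tab-separated start time, end time, label on each line) - the filename and label here are just placeholders:

```python
# Sketch: read marker times from an Audacity-style label export
# ("start<TAB>end<TAB>label" per line). Filename and label are placeholders.
def load_marker_times(path, label="clash"):
    times = []
    with open(path) as f:
        for line in f:
            parts = line.strip().split("\t")
            if len(parts) >= 3 and parts[2] == label:
                times.append(float(parts[0]))
    return times

print(load_marker_times("markers.txt"))   # e.g. [4.0, 12.5, 27.3, 41.0]
```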
-
@pingdesigns said:
Another idea I had would be to try to cue the sound file at the same time as the sound computer, playing it in my Isadora in the background and using your second guide - hoping for not too much user error...
I worked on a bunch of dance shows when I was in undergrad where synced audio and lights, or audio and video, were just handled by the Stage Manager telling the QLab/Isadora/lightboard operators to press the button at the same time.
This is essentially the method I was taught for calling cues like that: https://everythingbackstage.com/calling-cues/
Best wishes,
Woland
-
@woland Yes I think this is what I'm going to do, it's the most straightforward way - I do wish it was possible to set up this kind of interactivity though! Thanks.
-
@pingdesigns if the sound file was created by the composer, maybe you can get a bounce of the clashes only as a separate file (or, better yet, a multi-channel WAV file that has the full mix and the clashes separately). QLab can play multi-channel WAV files and route them to different outputs on a sound card. You then only need to take a line out from the clashes audio track on the QLab machine and plug it into an input on the Isadora machine. From there you can use audio capture in Isadora to get the audio of the clashes and analyse that track alone. This will give you clean, isolated audio that you can use as triggers to drive your video effects.
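To give a feel for the analysis step, here's a rough sketch (not tested in your setup) of watching a live input and firing whenever its level crosses a threshold. It assumes the sounddevice and numpy packages and that the isolated clash feed arrives on the default input device; inside Isadora you would do the equivalent with the sound watcher actors:

```python
# Rough sketch: fire a trigger when the isolated clash feed gets loud enough.
# Assumes the sounddevice and numpy packages and the default input device.
import time
import numpy as np
import sounddevice as sd

THRESHOLD = 0.2          # tune by ear against the isolated clash track
last_trigger = [0.0]     # simple debounce so one clash fires only once

def on_clash():
    print("clash detected")   # stand-in for triggering the flash

def callback(indata, frames, time_info, status):
    rms = float(np.sqrt(np.mean(indata ** 2)))
    if rms > THRESHOLD and time.monotonic() - last_trigger[0] > 0.5:
        last_trigger[0] = time.monotonic()
        on_clash()

with sd.InputStream(channels=1, callback=callback):
    sd.sleep(60_000)     # listen for 60 seconds
```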
I know people do amazing things all manually cued, but for some reason I always prefer the precision and reliability of generated cues. I guess I am always scared of missing a cue or not being exactly on time, so I put in the extra work for automated workflows.
-
@fred said:
@pingdesigns if the sound file was created by the composer, maybe you can get a bounce of the clashes only as a separate file (or, better yet, a multi-channel WAV file that has the full mix and the clashes separately). QLab can play multi-channel WAV files and route them to different outputs on a sound card. You then only need to take a line out from the clashes audio track on the QLab machine and plug it into an input on the Isadora machine. From there you can use audio capture in Isadora to get the audio of the clashes and analyse that track alone. This will give you clean, isolated audio that you can use as triggers to drive your video effects.
Brilliant suggestion as always!
@fred said:
I know people do amazing things all manually cued, but for some reason I always prefer the precision and reliability of generated cues. I guess I am always scared of missing a cue or not being exactly on time, so I put in the extra work for automated workflows.
I also prefer airtight, automated solutions whenever possible, so your preference for them isn't unusual, but sometimes using humans is the only feasible solution available. I have, though, myself been the human who pressed the button at the wrong time.
-
@fred Thank you for this suggestion - I was just thinking that, but unfortunately I don't think it's possible this time around in terms of time and how the track was originally made. It also still relies on access to / knowledge of QLab, which I don't have... I agree, I would prefer to automate it, but now I know for next time that with more time (and organising with the composer), I could try this route.
-
@pingdesigns
If the sound file is recorded, you can set up Isadora to sequence the effects. As the sound file starts with a sudden noise (is this the 'clash' you are trying to sync to?), you could use a Sound Watcher to listen for the start of the track and trigger a series of Envelope Generators so that the End Trigger outputs send your video flash cues. I've done this before when working with a prerecorded soundtrack.
It's possible to do this using Trigger Delays, but I find that Envelopes are more obvious - you get to 'see' the pauses between triggers.
The way you chain them together can get a bit complicated, but there are many configurations: You could trigger all of the envelopes at the beginning of the track, each one a little longer than the previous one - this would help you to fine-tune the timing of each end trigger.
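For illustration only, that chaining idea boils down to something like this (made-up offsets; in Isadora the Envelope Generators play the role of the timers):

```python
# Sketch: one start trigger fans out to several timers of increasing length;
# each timer's expiry stands in for an Envelope Generator's end trigger.
import threading

OFFSETS = [4.0, 12.5, 27.3, 41.0]   # made-up seconds after the track starts

def flash(n):
    print(f"flash {n}")   # stand-in for the video effect

def on_track_start():
    for n, offset in enumerate(OFFSETS, start=1):
        threading.Timer(offset, flash, args=(n,)).start()

on_track_start()
```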
If it's being improvised, then you would need another method. How is the composer playing the sounds? Is there a MIDI controller involved? You can network machines together and grab MIDI signals from their machine to trigger your video effects.
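As a rough sketch of that idea (assuming the mido package with the python-rtmidi backend, and that the controller shows up as a local MIDI input), every note-on could become a flash trigger:

```python
# Sketch: treat every note-on from the first MIDI input as a flash trigger.
# Assumes the mido package with the python-rtmidi backend installed.
import mido

def flash():
    print("flash!")   # stand-in for the video effect

port_name = mido.get_input_names()[0]   # pick the controller's port here
with mido.open_input(port_name) as inport:
    for msg in inport:
        if msg.type == "note_on" and msg.velocity > 0:
            flash()
```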
If it's recorded, can you play the sound file from Isadora? Then you will have timecode to link your effects to using Comparators.