@tarox said:
Now I have the problem that all videos start at the beginning of the scene and I don't know where to switch this off...
If I correctly understand what you're looking for, you can use the speed input instead of the visible input. Toggling the visible input on/off always starts the clip from the beginning (play/stop). With the speed input, it works like play/pause: if you hit the key later, it will resume from the position where you stopped it.

You need the absolute value to switch between 0 and 1; without it, it will switch between -2 and +2.
Check out my comment here for some cueing examples: https://community.troikatronix.com/topic/9061/answered-reverse-cue-sheet/3?_=1770373876276
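If it helps to picture the difference, here's a conceptual sketch in plain Python (not Isadora scripting; the MoviePlayer class here is just my simplified model of how the Movie Player actor behaves):

```python
# Conceptual model: 'visible' works like play/stop, 'speed' like play/pause.
class MoviePlayer:
    def __init__(self):
        self.position = 0.0   # current playback position, in seconds
        self.playing = False

    def set_visible(self, on):
        # Play/stop: turning visible back on restarts from the top.
        self.playing = on
        if on:
            self.position = 0.0

    def set_speed(self, speed):
        # Play/pause: speed 0 holds the position, speed 1 resumes it.
        self.playing = speed != 0
```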
If the goal is to trigger the videos in a set order, I think your best bet will be connecting the output of a Keyboard Watcher actor to the input of a Sequential Trigger actor, then connecting each of the Sequential Trigger actor's outputs to a different Movie Player actor's 'visible' input. That'll let you hit the same key repeatedly to trigger the videos one at a time, in sequence.
You'll also need to initialize the 'visible' input of each Movie Player actor as 'off'.
You can learn about initializing values from our YouTube tutorial on initializing values in Isadora.
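In case it helps to see the logic, here's a rough sketch in plain Python (again, not Isadora scripting, just a conceptual model of what the Sequential Trigger actor does):

```python
# Conceptual model of a Sequential Trigger: each incoming trigger
# fires the next output in order, wrapping around at the end.
class SequentialTrigger:
    def __init__(self, num_outputs):
        self.num_outputs = num_outputs
        self.index = 0

    def trigger(self):
        fired = self.index
        self.index = (self.index + 1) % self.num_outputs
        return fired

# Three Movie Players, all initialized with 'visible' off:
players_visible = [False, False, False]
seq = SequentialTrigger(len(players_visible))

def on_key_press():
    players_visible[seq.trigger()] = True  # turn on the next video

on_key_press()            # first press  -> video 1
on_key_press()            # second press -> video 2
print(players_visible)    # [True, True, False]
```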
If you want to trigger the videos in an order that's not set in stone, you could instead connect a separate Keyboard Watcher actor to the 'visible' input of each Movie Player actor, again initializing each 'visible' input as 'off'. (You'll want to set each pair's Keyboard Watcher to a different key so that each video has its own trigger key.) If you want to be able to turn them on and off with the same key, you can put a Toggle actor between each Keyboard Watcher actor and Movie Player actor.
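The Toggle pattern, sketched the same conceptual way in plain Python (not Isadora scripting):

```python
# Conceptual model of a Toggle actor: the same key press
# alternately turns its output on and off.
class Toggle:
    def __init__(self):
        self.state = False

    def trigger(self):
        self.state = not self.state
        return self.state

toggle = Toggle()
# Each press of that video's key flips its 'visible' input:
for press in range(4):
    print("visible:", "on" if toggle.trigger() else "off")
```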
Here's an old .dmg from one of my computers: https://drive.google.com/file/d/1MHFYkEOJcO5TEevw7Bi2KSuKR8m54iTG/view?usp=sharing
I'm not sure that it was ever updated to run on Apple Silicon computers, so even after you download it, you'd have to try running it in Rosetta Mode and hope that it works.
Best of luck to you :)
Additionally, taking a look at the timing logic from this example file of mine might be helpful for understanding more about how to do this kind of thing: https://troikatronix.com/add-ons/random-media-random-duration/
I swear that sometimes the forum hides posts from me for a while...
Anyway, here you go.
FILE DOWNLOAD -----> fade-out-video-x-seconds-from-end-4.1.3.izz

Any news on this issue from your side?
There are more audio capabilities coming with the next release. You’re already in the beta program, so you should have access to the new beta and new audio actors. Have you tried working with any of those?
In particular, the new Audio Frequency Bands, Audio Frequency Watcher, and Audio Level Watcher actors let you do live audio frequency and level analysis without needing to go through the Live Capture system (and thus let you use as many virtual audio sources and/or mics as you want).
P.S. Anyone else reading this who’s interested in trying out the beta with the new audio actors (which also adds VST3 audio plugin support, an Audio to Text actor, and more) can send in a ticket using the link in my signature below.
You can probably also use a normal camera with the Freeze + Difference method in this tutorial: https://troikatronix.com/add-ons/tutorial-basic-motion-tracking/
The method is also explained here: https://vjskulpture.wordpress.com/2009/12/14/motion-tracking-in-isadora
But you’ll likely need to use other video actors like Contrast Adjust to make the dancer pure white.
It’s also very lighting dependent, but it works from further away than a Kinect.
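For anyone who prefers to see the idea outside of Isadora, here's a minimal sketch of the Freeze + Difference approach using Python and OpenCV (my assumption for illustration; inside Isadora you'd patch Freeze, Difference, and Contrast Adjust actors rather than write code):

```python
import cv2

cap = cv2.VideoCapture(0)
_, frozen = cap.read()          # "Freeze": grab the empty stage once

while True:
    ok, frame = cap.read()
    if not ok:
        break
    diff = cv2.absdiff(frame, frozen)             # "Difference"
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    # "Contrast Adjust": push anything that changed to pure white
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    cv2.imshow("silhouette", mask)
    if cv2.waitKey(1) == 27:                      # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```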
Really, there are a number of ways to do this. In the end, it comes down to what your input material looks like.
How close are the dancers to background objects? Is the background a solid color or something else? How much control do you have over the lighting?
Current tools like MediaPipe can be used via Pythoner to create this masking with AI. They work with an RGB camera, but will be limited in frame rate and resolution by the machine processing the feed. Other options use different keying methods to mask the dancer and background separately. The example given was likely shot on a green screen at the time, with the foreground chromakeyed from the background. A luma key can also be used to separate light and dark, but that will make separating things like feet on the ground very difficult.
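As a rough sketch of what that MediaPipe approach looks like (assuming the legacy Selfie Segmentation solution; inside Pythoner the frames would come from Isadora's video input rather than OpenCV):

```python
import cv2
import mediapipe as mp

# Person/background segmentation model from MediaPipe's legacy API.
seg = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = seg.process(rgb)
    # Pixels above the threshold are "person"; render them white.
    mask = (result.segmentation_mask > 0.5).astype("uint8") * 255
    cv2.imshow("mask", mask)
    if cv2.waitKey(1) == 27:    # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
seg.close()
```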
If you are running on a PC and have access to a Kinect V2, my kinect2share software can provide the silhouette as well.