@whatnao You could also run the projector in an activated scene with a Keyboard Watcher that listens for a trigger and toggles the projector on or off - so the movie keeps running and it is the projector that goes on or off. Since the scene is activated, the Keyboard Watcher will react even if you are currently in another scene.
So the following is purely for fun, in response to @mark's post imagining how this would be done. I did follow up on it over the weekend and got something "working".
I heavily modified the project I mentioned earlier by manually rolling it over to the TensorFlow Lite C API (a real pain!), then porting it to Windows and feeding it the deeplabv3_257_mv_gpu.tflite model. To make it useful to Isadora, I dusted off and updated an OpenCV-to-Spout pipeline in C++ that I used a few years ago for some of my live projection masking programs. So now my prototype can receive an Isadora stage, run it through the model, and output the resulting mask back over Spout for Isadora to use with the Alpha Mask actor.
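For anyone curious what the mask step involves: the standard deeplabv3 257 model outputs a 257x257 grid of scores over 21 PASCAL VOC classes, and you get a person mask by taking the highest-scoring class per pixel and keeping only "person" (index 15 in that label set). My actual code does this in C++ against the TFLite C API, but here is a minimal NumPy sketch of the idea (the shapes and the class index are the only assumptions, taken from the standard model; the tiny demo array is made up):

```python
import numpy as np

PERSON_CLASS = 15  # "person" index in the PASCAL VOC label set deeplabv3 is trained on

def scores_to_alpha_mask(scores: np.ndarray) -> np.ndarray:
    """Convert raw deeplabv3 output (H, W, 21 class scores) to an 8-bit alpha mask.

    Each pixel is assigned the class with the highest score; pixels classified
    as 'person' become opaque (255), everything else transparent (0).
    """
    labels = np.argmax(scores, axis=-1)  # (H, W) class indices
    return np.where(labels == PERSON_CLASS, 255, 0).astype(np.uint8)

# Tiny demo with fake scores: one pixel strongly "person", one strongly background.
demo = np.zeros((1, 2, 21), dtype=np.float32)
demo[0, 0, PERSON_CLASS] = 5.0  # person wins at pixel (0, 0)
demo[0, 1, 0] = 5.0             # background wins at pixel (0, 1)
print(scores_to_alpha_mask(demo).tolist())  # [[255, 0]]
```

In the real pipeline that 257x257 mask then gets resized back up to the stage resolution before being sent out over Spout.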
Now obviously, this is insane to actually attempt for production purposes in its current form. I'm getting about 5 fps (granted, with no GPU acceleration, and running in debug mode). I could slightly improve things by bouncing the original Isadora stage back on its own Spout server, but this is just a proof of concept. From this state, it should be relatively easy to port to Mac/Syphon and add GPU acceleration on compatible systems for higher FPS and/or multiple instances for many performers.
Again, just a fun weekend project but I found it very educational.
Hi, those are great solutions and I didn't realise Isadora could do that. I have some ideas about linking multiple Eyes actors together to lock on to the eye shape and then placing a Cluedo-style cutout over it with a Gaussian mask. I attempted a couple of versions (attached, sans eyes) prior to this. Obviously foreground faces aren't always more luminous than the background, plus a variety of other problems inherent in a non-machine-vision approach. ndi fakebg.izz / ndi fakebg2.izz
Ultimately, for our application we're aiming for zero user requirements, so we're trying to avoid asking the user to do anything. However, it could be possible to gamify this step in the interim as part of onboarding... Thanks!
Regarding the Kinect v2 keyed output: although the camera captures 1920 x 1080 video, the depth sensor is 512 x 424. The keyed feed therefore matches the depth feed and is 512 x 424. That is a Spout feed with alpha. You are correct, this app will only run on PC.