
Dear Izzionistas, after being away from Isadora for a while I just wanted to do a quick video mapping and got surprised: I expected to find the circles at the corners plus the little icons for scale/move, but they are no longer shown...?
I can still move these points, but hitting them precisely has become quite hit-or-miss...
I am on Izzy 3.2.6 on a 23MBP M2 Pro with Sonoma.

If you use tungsten pars with red, green, and blue filters in front, they will emit a lot of IR light even at low intensity. That IR flood shouldn't change, and you can change all the other lights IF you filter them out of your IR camera.

@bonemap Well, we don't need a special solution for low-light conditions. I did that for years: take any camera that sees near-IR light, flood the stage with infrared that doesn't change, put a visible-light filter in front of the camera, et voilà. Never had problems with it 90% of the time.

@bonemap A lot - if not most - of the ML models are fine with black-and-white images, i.e. IR video (as used by the TOF and structured-light cameras you mention). However, you will need to provide the light source and a camera stream that can see the IR-lit subjects - no pixels, no information. The Kinect etc. have their own light source, which of course you can use and feed into these models (i.e. a Kinect IR video stream) - this may be a better approach than feeding the images into OpenNI, which is essentially a dead, outdated hack.
Not sure which ML model you are using, but there are many, and this is a good one: https://github.com/MVIG-SJTU/A...
Fred
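A minimal sketch of the point above, assuming numpy is available: RGB-trained models expect an H×W×3 array, so a single-channel IR frame (e.g. a Kinect IR stream) can simply be replicated across three channels before being handed to the model. The `ir_to_rgb` helper name is mine, not from any library:

```python
import numpy as np

def ir_to_rgb(ir_frame: np.ndarray) -> np.ndarray:
    """Replicate a single-channel IR frame into the 3-channel layout
    that RGB-trained pose/tracking models expect.
    Assumes ir_frame is a (H, W) uint8 array."""
    if ir_frame.ndim != 2:
        raise ValueError("expected a single-channel (H, W) frame")
    # Stack the same grayscale values into R, G, and B.
    return np.repeat(ir_frame[:, :, np.newaxis], 3, axis=2)

# A fake 4x4 frame stands in for a real IR camera grab.
frame = np.arange(16, dtype=np.uint8).reshape(4, 4)
rgb = ir_to_rgb(frame)
print(rgb.shape)  # (4, 4, 3)
```

Since the model was trained on colour images, results on IR input can still vary; replication just makes the tensor shape acceptable.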

I have posted a 'rough cut' of a tutorial video on setting up Pythoner to work with MediaPipe to my personal YouTube channel.
Please let me know if you find any issues, or feedback regarding clarity etc.. I will do a round of updates to the video before making it available officially on the Troikatronix YT channel.
Attached here is an Example file (used in the video, but now updated).
Again, if you have suggestions to improve this let me know (it is meant as a starting point).
NOTE: Pythoner has the ability to use 3 different configurations for which python environment is used.
I will make a video to walk thru these and their pros and cons soon.
In this video, I focus on the global virtual environment setup Isadora supports.
The other options are a Local (to the project root) Virtual Environment, and the default environment included in the Pythoner actor (the most portable option, but limited in features).
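One small building block that comes up when wiring MediaPipe output into Isadora: MediaPipe returns landmarks normalised to 0.0-1.0, so you typically rescale them to your stage or video resolution before feeding actor inputs. A minimal sketch - the helper name is mine, and the clamping choice is an assumption (inferred joints can land slightly off-frame):

```python
def landmark_to_pixels(x_norm, y_norm, width, height):
    """Map a MediaPipe normalised landmark (0.0-1.0 per axis)
    to integer pixel coordinates for a given output resolution."""
    # Clamp first: landmarks can fall slightly outside [0, 1]
    # when a joint is inferred off-frame.
    x = min(max(x_norm, 0.0), 1.0)
    y = min(max(y_norm, 0.0), 1.0)
    return (round(x * (width - 1)), round(y * (height - 1)))

print(landmark_to_pixels(0.5, 0.25, 1920, 1080))  # (960, 270)
```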

@bonemap I think the best solution right now is https://www.move.ai/ BUT disguise has got its hands on it. So the price is high... unfortunately.

@fred said:
@skulpture your signature has more windows than mac so here are a few possible tips.
Use AlwaysUp (https://www.coretechnologies.c...) to run Isadora as a service - you can lock out other apps, kind of a kiosk mode, when there isn't one.
This looks interesting, thank you. I use one called 'Restart on Crash'; it kept a museum install going for over a year for me. The one you have suggested looks way better. Thanks.
@fred said:
Next - to hide the toolbar - you can use the overscan settings for your monitor - not the best solution, but it basically zooms in on the output using your GPU - no overhead and no real settings - once it is set for a screen it should just remember it.
A good idea - I will consider it. But I think I am going to use the stage output and the Stage Mouse Watcher.
@fred said:
Last - not sure if this is appropriate, but if you switch to using a touch screen and disable as much of the keyboard assist as you can (not 100% possible in Windows, but you get close), and then recalibrate with the overscan, you get a pretty convincing kiosk-like display.
We are using a 'proper' touch screen kiosk - I am not sure of the make/model, but it's one of those portrait ones you see in museums/train stations, etc. It has a built-in computer with a decent spec.
@fred said:
Last - lots of extra work, but a cool option: an OSC browser-based control panel - https://github.com/colinbdclar... - this library will let you use web-based UI elements that send and receive OSC - Chrome has an excellent kiosk mode that works well on most OSes and perfectly on Linux - this means you can run just about any old trash machine to do your UI and pass it on to Isadora.
Brilliant, will look at this too. If not for this project, I am sure it will be handy. At a glance it seems similar to Open Stage Control and OSCAR.
As always, thanks @fred.

@skulpture your signature has more windows than mac so here are a few possible tips.
Use AlwaysUp (https://www.coretechnologies.c...) to run Isadora as a service - you can lock out other apps, kind of a kiosk mode, when there isn't one.
Next - to hide the toolbar - you can use the overscan settings for your monitor - not the best solution, but it basically zooms in on the output using your GPU - no overhead and no real settings - once it is set for a screen it should just remember it.
Last - not sure if this is appropriate, but if you switch to using a touch screen and disable as much of the keyboard assist as you can (not 100% possible in Windows, but you get close), and then recalibrate with the overscan, you get a pretty convincing kiosk-like display.
Last - lots of extra work, but a cool option: an OSC browser-based control panel - https://github.com/colinbdclar... - this library will let you use web-based UI elements that send and receive OSC - Chrome has an excellent kiosk mode that works well on most OSes and perfectly on Linux - this means you can run just about any old trash machine to do your UI and pass it on to Isadora.
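For anyone curious what actually goes over the wire: an OSC message is just a null-padded address string, a type-tag string, and big-endian arguments, so you can fire one at Isadora from the Python standard library alone. A sketch, assuming Isadora's OSC listener is on its default port 1234 (check your Midi/OSC setup) - the `osc_message` helper name is mine:

```python
import socket
import struct

def osc_message(address: str, value: float) -> bytes:
    """Hand-encode a single-float OSC message (no dependencies):
    null-terminated address and type-tag strings, each padded to a
    multiple of 4 bytes, followed by the big-endian float32 argument."""
    def pad(s: bytes) -> bytes:
        # OSC strings need at least one null and 4-byte alignment.
        return s + b"\x00" * (4 - len(s) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# Fire a value at Isadora's default OSC input port;
# /isadora/1 maps to OSC listener channel 1.
packet = osc_message("/isadora/1", 0.5)
socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 1234))
```

In practice a library like python-osc (or osc.js in the browser) does this for you; the hand-rolled version is just to show there is no magic involved.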

Hi @armando,
I have also been following these developments for years and have Mediapipe running in Isadora through the Pythoner plugin. I have hand tracking, face tracking and pose tracking variations as separate Pythoner patches. There has been a fair bit of upkeep to these patches and the upgrade of patches with new versions of Mediapipe, Pythoner and Isadora. This has meant reinvesting in the integration with Isadora over time.
The BIG QUESTION for me with flat RGB video for these new AI and ML approaches is that it does not allow me to track performers in a theatrical setting - that is, in lighting conditions that are not optimal for capturing the body as an RGB image, for example in darkness. It remains critical that body tracking for performance is agnostic to reflected light, i.e. works in darkness or with a variety of lighting and projection sources. AI and ML tracking has not proved itself in theatrical performance because it requires the tracking subject to be clearly represented in a video stream.
Structured-light devices - like the Kinect and OpenNI variants - are still important precisely because they operate without a visible light source illuminating the tracking subject.
But please, if there is an AI or ML solution that works in darkness without visible light, I would love to know about it!
Best wishes
Russell