
Hello everyone! We're in an emergency situation...
While testing for an installation, we ran into the following problem: a Mac mini M4 (16GB RAM, running Sequoia) doesn't recognize a Matrox TripleHead2Go DP. The Display Manager sees it, but it doesn't output an image.
We then tried a Matrox TripleHead2Go SE, and that one does output an image, but it's glitchy and very distorted at 3040x768.
I searched the forums, and it seems to be an issue with one or two recent macOS versions at the moment. According to Matrox, the unit outputs the full resolution at 50Hz, but macOS doesn't let you change the refresh rate.
Has anyone experienced this situation? Or found a workable solution?
Thanks for everything!
Best,
Maxi

@tomthebom are you able to take a screenshot please? Seems like a GUI issue.

Not sure exactly what's happening here, but maybe try deleting your mapping slices and re-making them?
Also, if you post the file and tell us what Scene your mapped projector is in, we could take a look for you.

Dear Izzionistas, after being away from Isadora for a while, I just wanted to do a quick video mapping and got a surprise: I expected to find the circles at the corners plus the little icons for scale/move, but they are no longer shown...?
I can still move these points, but hitting them precisely has become quite difficult...
I am on Izzy 3.2.6 on a 2023 MBP M2 Pro with Sonoma.

If you use tungsten PARs with red, green, and blue filters in front, they will emit a lot of IR light even at low intensity. That IR output shouldn't change, and you can change all the other lights IF you filter them out of your IR camera.

@bonemap Well, we don't need a special solution for low-light conditions. I did that for years: take any camera that sees near-IR light, flood the scene with infrared that doesn't change, and put a visible-light filter in front of the camera, et voilà. It worked without problems 90% of the time.

@bonemap A lot, if not most, of ML models are fine with black-and-white images, i.e. IR video (as used by the TOF and structured-light cameras you mention). However, you will need to provide the light source and a camera stream that can see the IR-lit subjects; no pixels means no information. Kinect etc. have their own light source, which you can of course use and feed into these models (i.e. a Kinect IR video stream). This may be a better approach than feeding the images into OpenNI, which is essentially a dead, outdated hack.
Not sure what ML model you are using, but there are many, and this is a good one: https://github.com/MVIG-SJTU/A...
Fred
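To illustrate the point above about feeding IR video to RGB-trained models: most models expect a 3-channel input, so a single-channel IR frame just needs its channel replicated before inference. A minimal sketch in Python with NumPy (the function name `ir_to_rgb` and the frame dimensions are hypothetical, not from any specific API):

```python
import numpy as np

def ir_to_rgb(ir_frame: np.ndarray) -> np.ndarray:
    """Replicate a single-channel IR frame into 3 channels so it can be
    fed to ML models that expect RGB input. `ir_frame` is assumed to be
    a 2-D uint8 array (H x W) from an IR camera stream."""
    if ir_frame.ndim != 2:
        raise ValueError("expected a single-channel (H x W) frame")
    return np.stack([ir_frame] * 3, axis=-1)

# Hypothetical 480x640 IR frame (e.g. from a Kinect IR stream)
frame = np.zeros((480, 640), dtype=np.uint8)
rgb = ir_to_rgb(frame)
print(rgb.shape)  # (480, 640, 3)
```

The resulting array can then be handed to whatever pose/detection model you are using, since the model only cares about the tensor shape, not whether the pixels originated as visible or IR light.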

I have posted a 'rough cut' of a tutorial video on setting up Pythoner to work with MediaPipe to my personal YouTube channel.
Please let me know if you find any issues, or have any feedback regarding clarity, etc. I will do a round of updates to the video before making it available officially on the TroikaTronix YT channel.
Attached here is an example file (used in the video, but now updated).
Again, if you have suggestions to improve this let me know (it is meant as a starting point).
NOTE: Pythoner has the ability to use 3 different configurations for which python environment is used.
I will make a video to walk through these options and their pros and cons soon.
In this video, I focus on the global virtual environment setup Isadora supports.
The other options are a local (to the project root) virtual environment, and the default environment included in the Pythoner actor (the most portable option, but limited in features).
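For anyone trying the global virtual environment option before the follow-up video is out, the general shape of the setup is the standard Python venv workflow. A hedged sketch (the path `~/isadora-pythoner-venv` is a placeholder, not a path Isadora requires; point Pythoner's configuration at whatever location you choose):

```shell
# Create a dedicated virtual environment for Pythoner to use
python3 -m venv ~/isadora-pythoner-venv

# Activate it and install the packages the tutorial uses
source ~/isadora-pythoner-venv/bin/activate
pip install mediapipe
```

See the tutorial video for how to point Isadora's Pythoner configuration at this environment.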

@bonemap I think the best solution right now is https://www.move.ai/ BUT disguise has got its hands on it, so the price is high... unfortunately.