
A thousand thanks, it works fantastically. Do you have an idea how to change the default port and the message format? On every start it goes back to 7500 and bundled messages per axis, and I cannot find where to change this.
Again, a thousand thanks for this find.
Best regards,
Jean-François

Hi Mark, I downloaded Movement OSC and it runs smoothly on my Intel Mac. OSC out to Isadora works for me in bundled xyz mode, with a change of port number. I'm teaching a session with some robotics students today, so looking forward to testing it out with them. Thanks @Armando - this is a great resource.

@jessicacohen I think this depends on what you want to do with the depth camera - overall I would go with something newer - check the Orbbec Femto Bolt https://www.orbbec.com/product...
This is basically the Kinect Azure hardware repackaged - it supports Windows, Linux, and macOS. It seems to have good support in TouchDesigner (link below) - but it will not work with Isadora.
If you want skeleton tracking, the only Isadora option is OpenNI, which is a long-dead, no-longer-updated library. Although some legacy code and resuscitation may keep it going for a while, I would look into other options.
The Kinect Azure did come with native skeleton tracking for Windows and Linux - again available in TD, but not in Isadora (the large Mac user base makes this a difficult implementation).
For skeleton, face, and body tracking there is a thread about using Python and MediaPipe https://community.troikatronix...
The solutions discussed there do not need a depth camera, and can be quite good. I have not used MediaPipe in Isadora, but I have used it in openFrameworks via this addon https://github.com/design-io/o... - not that I expect you to go with this code, just as an example of what it can do.
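To get MediaPipe's landmark data from a Python script into Isadora, you would typically send it as OSC over UDP. In practice a library like python-osc does this for you, but as a rough illustration of what is actually on the wire, here is a minimal stdlib-only sketch of encoding a float OSC message (the address "/pose/nose" and port are made-up examples, not anything Isadora requires):

```python
import struct

def osc_message(address, *floats):
    """Build a minimal OSC packet: null-padded address string,
    null-padded type-tag string, then big-endian 32-bit floats."""
    def pad(b):
        # OSC strings are null-terminated and padded to a multiple of 4 bytes
        return b + b"\0" * (4 - len(b) % 4)
    msg = pad(address.encode()) + pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

# Hypothetical usage: send one joint's x, y, z to Isadora listening on UDP 1234
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(osc_message("/pose/nose", 0.5, 0.4, 0.0), ("127.0.0.1", 1234))
```

In Isadora you would then pick the values up with an 'OSC Listener' actor on the matching port.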
The only real missing piece is precise distance data (MediaPipe makes a good guess, but it is essentially 2D). Although a depth camera is not needed to use MediaPipe, if you do have access to a depth stream from the Femto - via TD, or maybe in Isadora using Python - there are helper functions where you can look up a pixel coordinate (a joint coordinate) from the RGB camera and get its real-world position from the depth camera.
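That pixel-to-world lookup is standard pinhole back-projection once the depth image is registered to the RGB image. A sketch, assuming you already have the depth value at the joint's pixel and the depth camera's intrinsics (the fx/fy/cx/cy values below are illustrative placeholders, not real Femto calibration):

```python
def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) at depth_m metres
    -> (X, Y, Z) in camera space. fx/fy are focal lengths in pixels,
    cx/cy the principal point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example: a joint detected at pixel (640, 360) with a 2.0 m depth reading,
# using placeholder intrinsics for a 1280x720 image
point = deproject(640, 360, 2.0, fx=600.0, fy=600.0, cx=640.0, cy=360.0)
# At the principal point, X and Y are 0 and Z is just the depth reading
```

The SDK helper functions do essentially this for you, including the depth-to-RGB registration step.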
It feels like we are at a bit of a crossover point with this tech: depth cameras are no longer necessary for skeleton tracking and will become less so, but they still have advantages. Likely because RGB skeleton tracking is getting so advanced, no one is really pushing the depth stuff.
Using the Orbbec SDK it would be trivial (for a programmer) to make a viewer that streamed the RGB, depth, and point cloud over Syphon/Spout - I don't have the hardware, but if someone lends me a unit I would give it a go.
The other issue to think about is connectivity - good recent depth cameras use USB 3.0, which is expensive to extend over long distances.
There are also the ZED cameras, with skeleton tracking in the SDK https://www.stereolabs.com/en-... and support in TD https://derivative.ca/UserGuid... - again, no macOS support.
I think for situations where I would previously have just automatically used a depth camera, right now I would take a closer look at what kind of tools can solve the issue at hand. Google's MediaPipe (https://github.com/google-ai-e...) is kind of winning at the moment and can be used within Isadora via Python.
If you have a specific use case that you need to solve maybe I can give more pointed advice.

Hi,
I have been unable to get the new release of Orbbec Astra cameras to work with the OpenNI module in Isadora. I made some enquiries to Orbbec support and they informed me that their current Astra models are not compatible with Isadora's OpenNI implementation - basically, that they are not the same camera, as evidenced by the change in model number. Although the current Astra and Astra Mini look identical to previous devices, they will not work with OpenNI in Isadora.
The Astra Mini I purchased direct from Orbbec sales works with the Orbbec TOP in TouchDesigner, but not with OpenNI in Isadora.
I was able to find an older Astra on the second-hand market, and it works great with OpenNI and Isadora in Rosetta mode on an M-series Mac.
Best wishes
Russell

It might be possible to do this with a spline path. Although I haven't tried this myself with the '3D Stage Orientation' module, I have a workflow for 3D paths implemented for 3D objects, particles, etc. in Isadora.
I start by making the shape with the requisite number of points in a 3D software package and exporting it. I then open the 3D file in MeshLab and export it as a JSON file. In Isadora, the 'JSON Parser' (it might require multiple) is then configured to read out the point data (x, y, z). This is done simply enough with a 'Pulse Generator' and a wrapped 'Counter'. The streams of x, y, z would then each be fed through a 'Limit-Scale Value' before linking to the '3D Stage Orientation'.
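For anyone preparing the data outside Isadora first: the 'Limit-Scale Value' step is just a clamped linear remap, and the JSON side is only a list of coordinates. A rough stdlib-only sketch under assumed names - the `{"points": [[x, y, z], ...]}` layout is a hypothetical export format, not necessarily what MeshLab writes:

```python
import json

def limit_scale(value, in_min, in_max, out_min, out_max):
    """Clamped linear remap, like Isadora's 'Limit-Scale Value' actor."""
    value = max(in_min, min(in_max, value))  # clamp to the input range
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

def stage_points(path, out_min=-1.0, out_max=1.0):
    """Load [[x, y, z], ...] from a JSON file (hypothetical layout) and
    remap every coordinate into the target range."""
    pts = json.load(open(path))["points"]
    lo = min(min(p) for p in pts)
    hi = max(max(p) for p in pts)
    return [[limit_scale(c, lo, hi, out_min, out_max) for c in p] for p in pts]
```

Inside Isadora the 'Counter'-driven 'JSON Parser' plays the same role: each pulse steps to the next point and the 'Limit-Scale Value' actors do the remap per axis.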
The other thing to look at is '3D Ropes', as this has a follow-path built into the module. You can limit the parameters to one rope and control the curve, etc. The actor has an option to reveal coordinate xyz outputs that travel along the rope as a 3D path.
best wishes
Russell

@gibsonmartelli said:
use a different keyboard, move usb key to different port
Those are the options that come to mind. First, try moving them to separate ports (opposite sides of the machine if possible).
If that doesn't work, you might be able to insert the keyboard dongle after opening Isadora (the HID check is likely the only thing that will fail here), although this will be awkward.
If all else fails, try using another keyboard. It would seem this one doesn't support HID in a standard way.

Hey Armando,
Thanks for sharing this. It seems to work very well, but I am having a little issue getting data into Isadora. Do you have an example patch using Movement OSC that you could share, please?
Thanks
Mark (not that Mark...)
@2250watt Also, I just saw this: https://community.troikatronix...
@jessicacohen Hi there! I’m also working with motion tracking for the first time in a project and have had great results using VisionOSC: https://github.com/LingDong-/V...
It’s based on Apple’s Vision Framework and runs quite smoothly. The only downside is that it requires a fair amount of computing power—but for single-type tracking, it performs really well. I’ve also built a User Actor for the Pose Detection feature, in case you’re interested! ☺️