[ANSWERED] Kinect 1 and Isadora 3: Ghost and OSC, how to make it work?
@bonemap Dear Russel, thanks for your clarifying reply ;-)
If a comparison post between the Kinect and the Orbbec exists, could you send me a link? I'm really interested in better understanding the different capabilities of the various sensors (good to know about the environmental protection, though!).
When you talk about data recording, is it different from getting OSC data from the various skeleton parts, as John said? (The difference between recording and OSC output isn't clear to me.)
I need to start the project quickly and don't want to buy the licence just to get NI mate's OSC output... do you know if the old Processing tutorial still works with Isadora 3?
Many thanks for your insights!
do you know if the old Processing tutorial still works with Isadora 3
It does not
@woland Thanks, I guessed as much since all the webpages are down...
Too bad... thanks, guys!
the Isadora solution will not have skeleton data recording
Easily solved with a pair of Data Array actors
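Outside Isadora, the record/replay idea behind those Data Array actors can be sketched in a few lines of Python. This is purely illustrative: the class name, the OSC address, and the timing scheme are my own assumptions, and a real setup would receive the messages through an OSC library rather than direct method calls.

```python
import time

class OscRecorder:
    """Minimal record/replay buffer for incoming OSC-style messages.
    Illustrative only: inside Isadora the same idea is a pair of
    Data Array actors writing to and reading from a data file."""

    def __init__(self):
        self.events = []   # list of (elapsed_seconds, address, values)
        self._start = None

    def record(self, address, values, now=None):
        """Store a message with its time offset from the first message."""
        now = time.monotonic() if now is None else now
        if self._start is None:
            self._start = now
        self.events.append((now - self._start, address, list(values)))

    def replay(self, handler, speed=1.0, sleep=time.sleep):
        """Feed recorded events back to `handler` with the original timing."""
        last = 0.0
        for t, address, values in self.events:
            sleep((t - last) / speed)
            last = t
            handler(address, values)
```

For development without a performer, you would record a short session once and then loop `replay()` into your patch, which is essentially what the NiMate recording feature offers.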
spatial trigger point calibration
Comparators + fuzzy logic and/or Eyes/Eyes++ sort this out, if I understand correctly
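For what it's worth, the comparator approach to a spatial trigger point boils down to a per-axis range test. A minimal sketch, with hypothetical zone coordinates and units (this is the idea behind chaining Comparator-style actors, not Isadora code):

```python
def in_zone(point, zone):
    """True if an (x, y, z) skeleton point lies inside an axis-aligned box:
    one range comparison per axis, ANDed together."""
    x, y, z = point
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = zone
    return xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax

# Hypothetical calibration: a trigger zone roughly at chest height,
# about a metre into the space (units are whatever the sensor reports).
ZONE = ((-0.5, 0.5), (0.8, 1.6), (0.5, 1.5))
```

Calibrating is then just a matter of walking a performer to the spot and noting the reported coordinates for each zone boundary.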
So shall I go back to NI mate and Syphon, or wait a bit longer? Did I miss a Pandora's box in the existing plugins?
If you open a support ticket, I can add you to the beta program to try out the plugins.
Bennnid:
Done! Thanks a lot!
@woland I was thinking the same... :-)
the Izzy stuff is not currently compatible with Kinect 2
Yes, the Kinect 2 provides much more complex data than the other cameras, so Mark is planning to handle it in its own dedicated plugin.
@dbini said: output xyz data for the major skeleton points (up to 6 users simultaneously).
That's amazing! Six simultaneous users... damn!
"currently there is nothing that works like the Ghost output of NIMate, but there is a depth image, colorised bodies and a cool skeleton vector."
So with an alpha mask and a chroma key, can we get a ghost, or is it something different?
Bennnid:
For another installation, I'd like to track people from above (from the ceiling of a room, for example)... getting dots would be enough (I don't need full-body recognition)... do you think this could be possible?
dbini:
With a chroma key we can get something like the Color ID effect of NIMate (a bit better, in fact; there are some depth details in the body), but the blobby Ghost effect needs to be simulated with blurring and thresholding. Top-down positioning should work the same as with NIMate: a little less stable than when the Kinect is used the way it was designed to be used.
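To illustrate the blur-then-threshold idea (and the "dots are enough" top-down case), here is a minimal pure-Python sketch. The blur radius, cutoff value, and single-centroid approach are placeholder choices for illustration, not what the Isadora plugin actually does internally.

```python
def box_blur(img, r=1):
    """Naive box blur: average each pixel over its (2r+1)^2 neighbourhood."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def ghost_mask(depth, blur_radius=1, cutoff=0.3):
    """Blur then threshold a depth image to get a soft 'ghost' silhouette."""
    blurred = box_blur(depth, blur_radius)
    return [[1 if v > cutoff else 0 for v in row] for row in blurred]

def centroid(mask):
    """One 'dot' per mask: the centroid of the on pixels (top-down tracking)."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        return None
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))
```

For multiple people under the sensor you would need proper blob detection (connected components) rather than a single centroid, which is roughly what Eyes++ provides inside Isadora.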
really interested in knowing better the different capacity of various sensors
To answer your question, I don't know of a comparison that I can refer you to. One thing to note is that the low-cost Xbox Kinect is no longer in production. Microsoft now offers the Kinect Azure, which appears much more costly and is developed for the AI market. This, theoretically, opens the door for other manufacturers. I think the success of Isadora's depth image plugin also rests on the availability of low-cost sensors.

I have been particularly impressed by the Orbbec Astra Mini, as it has a much smaller form factor than an Xbox Kinect or any other model of sensor, which provides more versatility. Along with the dust- and waterproof casing mentioned before, it requires only power through the USB connection; no additional power supplies are needed. This resonates with me, as part of what I do is outdoor events, so the possibility of using a depth sensor while performers are in or near a lake (for example) appeals to me. One of the things I am currently testing is the operating cable length I can use with the sensor; so far I have managed a stable connection at 10 m using two powered USB 3 cables.

The Orbbec sensors don't work with NiMate at all, so I am investing my project development in Isadora. Thanks @mark for making this possible.
The NiMate software has a number of different modes of operation, skeleton tracking being just one of them. The trigger and calibration options are quite sophisticated, along with the ability to record the depth sensor image to an external data file that can be replayed back into NiMate. This is a key feature of NiMate: a depth image can be recorded and then replayed or looped to generate tracking, trigger and skeleton data, which allows you to develop a system without needing a performer constantly in front of the sensor. So there is an economic imperative behind this as a feature request for Isadora; I wish I could afford to pay a performer for the many hours of fiddly, tinkering system development. @Woland is possibly right in suggesting that many of the advanced triggering modes and feature sets in NiMate can be emulated in Isadora once the new plugin is available, but I would say that emulating some aspects of what NiMate does will require advanced Isadora programming skills (such as depth image recording and playback through the plugin). Let's hope Isadora's advanced users share some of the user actors they develop to extend the range of features possible from the depth image provided by the plugin! (My hands are clasped in hope and anticipation.)