KinectV2 OSX
-
So far there is no middleware for OSX capable of decoding the skeleton data, or anything other than the video streams provided by the app in this thread. It may happen in the future, but I would not rely on it. On Windows, using the SDK, you can get full access to the higher functions of the camera, and there are some solutions for sending skeleton data over OSC. The app in this thread will only send the image streams, which is marginally useful, but not life changing.
-
@Fred
Does that include Ni Mate? https://forum.ni-mate.com/t/os-x-test-build-for-kinect-for-xbox-one-and-kinect-for-xbox-360/586

cheers,
bonemap
-
@bonemap there has been some activity on the libfreenect forums on this front (incorporating OpenNI code to get skeleton data with an updated libfreenect). In the end there may be some kind of fruitful activity, but I doubt it will catch up to the speed, efficiency, and large feature set on Windows. This post pretty much says it is unstable and unreliable, so yes, here is something, but it does not sound ready for shows.
Who knows, it may end up working OK one day, but this has worked perfectly on Windows for quite a long time now; MS even took the effort to work with the creative communities of Cinder and openFrameworks to create a set of tools for using it on Windows. [bonemap](http://troikatronix.com/troikatronixforum/profile/248/bonemap), did you try this?
-
Thanks @Fred,
No, I haven't tried it. It does not seem to be worth the effort at this point. What are your thoughts on Apple in this area? For example, there was the Apple purchase of PrimeSense some years ago now, but nothing has emerged except rumours about depth sensors built into future iterations of the iPad etc. I don't know if it is worth waiting for Apple to offer development in this arena? Or perhaps it will be the next big launch or new technology for Apple?

cheers,
bonemap
-
I am trying to align the depth image and the color image in OF (from Kinect 2). I can't find information on how this alignment should work. I know the color is 1920x1080 and the depth 512x424, but scaling the depth up to the height of the color doesn't align the images. Any knowledge / pointers? I can get it workably close within Isadora... but not perfect.
-
@DusX OK, let's go back to some fundamentals: no amount of scaling and quad warping will actually line up two cameras, or a camera and a projector. The lenses, sensors and imaging systems produce differently warped images (no image is unwarped) and have different extrinsics, intrinsics and FOVs. The offset is not linear and needs a complicated algorithm to transform between one and the other.
Isadora lacks the fundamental tools to do this. It can be done through some calibration (like you can see with camera calibration in OpenCV). Microsoft have of course prepared this transformation in their SDK through the coordinate mapper, which is accessible in OF in the Windows-only addon ofxKinectForWindows2.

If you are on PC in OF you can see a bit of how this works with these functions (this is not the place to go deep into code, so here are just the method names; a minimal sketch follows below):

`virtual HRESULT STDMETHODCALLTYPE MapCameraPointToDepthSpace(`
`virtual HRESULT STDMETHODCALLTYPE MapCameraPointToColorSpace(`

You could prepare a mesh, or reconstruct the coordinate mapping and use a shader in the GLSL tools now available in Isadora to achieve this, but first you need to reproduce the coordinate map.

I have wanted this kind of intelligent image manipulation in Isadora for a long time. Camera calibration, and camera-to-world/projector calibration, would be a great tool and is something that underlies many questions that come up on the forum, like projection onto tracked objects...
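To make that concrete, here is a minimal sketch (Windows only, C++ against the raw Kinect v2 SDK rather than the OF addon; the surrounding function and variable names are illustrative, not from any shipped sample):

```cpp
// Minimal sketch, Windows + Kinect v2 SDK: build the depth->color lookup
// that the coordinate mapper encapsulates.
#include <Windows.h>
#include <Kinect.h>
#include <vector>

// Map every pixel of the 512x424 depth frame into 1920x1080 color space.
// Each resulting ColorSpacePoint says which color pixel lines up with that
// depth pixel; the relationship is non-linear, which is why simple scaling
// can never align the two images.
bool mapDepthToColor(IKinectSensor* sensor,
                     const UINT16* depthData, // raw 512*424 depth frame
                     std::vector<ColorSpacePoint>& colorPoints)
{
    ICoordinateMapper* mapper = nullptr;
    if (FAILED(sensor->get_CoordinateMapper(&mapper))) return false;

    const UINT depthPointCount = 512 * 424;
    colorPoints.resize(depthPointCount);

    // One call transforms the whole frame using the factory calibration
    // stored in the camera.
    HRESULT hr = mapper->MapDepthFrameToColorSpace(
        depthPointCount, depthData,
        depthPointCount, colorPoints.data());

    mapper->Release();
    return SUCCEEDED(hr);
}
```
-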
Funny, as always... shortly after writing the previous post I found the coordinate mapper function in the ofxKinectForWindows2 addon (that is what I am working with). I see that nearly what I want to do is already done in the BodyIndexColor example, so I think I can simply port that code with some minor changes (previously I was building from the Base example). Thanks for the info. I had hoped that the images were corrected for alignment up front (in the Kinect hardware, before exposing the images); I simply wasn't sure how the Kinect's structure/logic is set up.
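For anyone doing a similar port, the core of the alignment, once you have a ColorSpacePoint table like the one from the sketch above, is just a resampling pass. This is a hypothetical helper, not code lifted from the BodyIndexColor example:

```cpp
// Hypothetical helper: resample the 1920x1080 BGRA color image into
// 512x424 depth space using the per-pixel mapping, so depth and color
// line up pixel-for-pixel. Unmappable pixels are written as black.
#include <Windows.h>
#include <Kinect.h>   // for ColorSpacePoint
#include <algorithm>
#include <vector>

void buildAlignedColor(const std::vector<ColorSpacePoint>& colorPoints,
                       const unsigned char* colorBGRA, // 1920*1080*4 bytes
                       unsigned char* alignedBGRA)     // 512*424*4 bytes
{
    for (size_t i = 0; i < colorPoints.size(); ++i) {
        const float fx = colorPoints[i].X;
        const float fy = colorPoints[i].Y;
        unsigned char* dst = alignedBGRA + 4 * i;
        // Invalid mappings come back as -infinity, so range-check
        // before casting to int.
        if (fx >= 0.0f && fx < 1920.0f && fy >= 0.0f && fy < 1080.0f) {
            const int cx = static_cast<int>(fx);
            const int cy = static_cast<int>(fy);
            const unsigned char* src = colorBGRA + 4 * (cy * 1920 + cx);
            std::copy(src, src + 4, dst);
        } else {
            std::fill(dst, dst + 4, 0); // no valid color for this depth pixel
        }
    }
}
```
-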
Hi all,
@Fred thanks so much for "KinectV2_Syphon"!
We are testing the app on an El Capitan MacBook Pro. Works well. We can't make it work on a Yosemite Mac mini though. We tried debugging by unplugging the power cord just like Mark recommends, and still nothing. Only a black image.
We are wondering if the app does not work on Yosemite? Thanks!
-
Hi, I did not compile the app for Yosemite; I don't have any machines still running Yosemite. I will see if I get a chance to do it in the coming days and send it.
-
We updated to Sierra since we thought it might be a Yosemite issue. Still showing only a black image on Sierra. Do you recommend plugging the Kinect into a PC and running the SDK to do the handshake, or what are we missing?
thanks
-
Your Mac mini does have USB 3, right?
-
It does not have USB 3. Only USB 2 ports :I
-
The Kinect V2 is a USB 3.0 device, which is why it does not work. You may be able to use something like this:
http://www.sonnettech.com/product/usb3gigethunderboltadapter.html
if you don't need the Thunderbolt port. Warning: I have not tested this with the Kinect.
-
Hello,
Just a little bit of personal information/experience:
– I am working at the moment on an installation with a Kinect V2, using only depth information. I use Processing with the Open Kinect library on a Mac; it works perfectly, precise and reliable. It is possible to do the number crunching in Processing and send the image to Isadora via Syphon (see the sketch after this post). I tried using a Kinect V1 in addition to get skeleton information; it works well, but in the end I didn't need it.
– In the latest beta version of Millumin, you can plug in a Kinect V1 or V2 and obtain depth and skeleton information on Mac (Millumin is Mac only). I know the people making Millumin, but we are not "friends" and are a little bit in competition on the software field; I am a little bit the "Isadora guy"! But it's proof it's possible. Unfortunately, it's very hard to output OSC from Millumin.
Jacques
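For anyone who prefers openFrameworks to Processing, here is a rough sketch of the same depth-to-Syphon idea in C++ with the ofxSyphon addon; the server name is arbitrary and the depth-filling step is a placeholder for whichever Kinect addon you use:

```cpp
// Rough sketch (macOS, openFrameworks + ofxSyphon): publish a depth texture
// to a Syphon server that Isadora can pick up as a video input.
#include "ofMain.h"
#include "ofxSyphon.h"

class ofApp : public ofBaseApp {
public:
    ofxSyphonServer syphon;
    ofTexture depthTex;

    void setup() {
        syphon.setName("KinectDepth");             // name shown to Syphon clients
        depthTex.allocate(512, 424, GL_LUMINANCE); // Kinect v2 depth resolution
    }

    void update() {
        // ...copy your Kinect addon's depth pixels into depthTex here,
        // e.g. depthTex.loadData(depthPixels);
    }

    void draw() {
        depthTex.draw(0, 0);
        syphon.publishTexture(&depthTex); // push this frame to Syphon
    }
};

int main() {
    ofSetupOpenGL(512, 424, OF_WINDOW);
    ofRunApp(new ofApp());
}
```
-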
@Fred
Thanks for the support! -
@jhoepffner thanks for the info. I knew skeletons with v2 were coming on OSX, but this is the first release implementation I have seen. I checked out their latest NiTE and OpenNI and see that it should all be working. I will have some time over the break to see if I can get this going.
-
Hello,
I am in need of Kinect version 2 skeleton data in Izzy. The Ni-mate people sent me a beta that supposedly does that. NOT WORKING, unfortunately. So, sadly, I had to turn again to TouchDesigner on PC, which allows plugging in a Kinect v2 directly and has "soft" lines with gravity, elastoviscosity algorithms, etc. But my heart belongs to Izzy!
-
Hello,
I just received my second Kinect adaptor and tried to plug two Kinect V2s into my Mac. It works, with one on USB 3 directly on the Mac and the other through a CalDigit Thunderbolt box. No skeleton, but depth information (my interest at the moment) is perfectly usable. Unfortunately I must do all the computation and image production in Processing, because there is no way to pass it directly to Isadora.
Jacques
-
@Armando there is also this: https://github.com/microcosm It is Windows, but it will get you OSC of the limb parts, although at last check it was XYZ only, no rotations.
-
Thanks, @Fred, I'll try soon and report