[ANSWERED] How do I use Effects with openNI Tracker (or motion capture)?
-
Hi guys! I'm a newbie currently taking an interactive video class. I've been exploring the Isadora software and watching tutorial videos, but I'm still confused about how to accomplish the video effect I want to make.
I was planning to use the OpenNI tracker and a projection on the floor. I want to make it interactive by causing ripples underneath people's feet as they walk around the floor (a water projection). And it'd be awesome if I could also add water sounds as people walk around the water projection...
I've been trying to work on this but I have no idea how to accomplish it... if anyone could assist or help me in any way, it would be very much appreciated ;-;
Sincerely...
-
The OpenNI tracker is meant to be used with 3D depth cameras like the Kinect or Orbbec, mainly to recognise human bodies, which isn't easy from above, obviously. It may be easier to use the depth video stream that the OpenNI actor offers from those depth sensors (see the 'output depth' value). In combination with the Eyes++ actor, which tracks 'hot spots' in a b/w image, everything beyond a set threshold distance above the floor would be recognised, tracked, and could trigger something like a sound or effect actor (see the depth min/max values).
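Outside Isadora, the depth min/max idea boils down to masking a depth image by a distance band. A minimal sketch in Python with numpy (the array layout, units, and threshold values are my assumptions, not anything the OpenNI actor exposes):

```python
import numpy as np

def depth_to_mask(depth, min_mm=500, max_mm=2500):
    """Keep only pixels whose distance falls inside the [min_mm, max_mm]
    band, roughly what the depth min/max values on the OpenNI actor do.
    `depth` is assumed to be a 2D uint16 array of millimetre distances."""
    mask = (depth >= min_mm) & (depth <= max_mm)
    return mask.astype(np.uint8) * 255  # b/w image ready for blob tracking
```

The resulting black-and-white frame is exactly the kind of image Eyes++ expects for spot tracking.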
-
Bear in mind that depth cameras won't do skeleton tracking from overhead - they are looking for human skeleton shapes, so they need to be placed at stage level, ideally around 1 m from the floor.
It is possible to get the x and z coordinates of people on stage from a depth camera, then map a video effect to those coordinates and project it onto the floor from a projector above the stage. BUT - all cameras see a cone of light, and depth cameras like the Kinect are no different: you would only be able to track a triangular area of the stage unless you use multiple cameras.
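If you go that route, the camera-to-projector mapping is the fiddly part. A hedged sketch of one common approach, a four-point perspective transform in Python with OpenCV (all coordinate values here are made-up calibration placeholders):

```python
import numpy as np
import cv2

# Measured once during setup: where the four stage corners appear in the
# camera image (src), and which projector pixels hit those corners (dst).
src = np.float32([[80, 60], [560, 55], [600, 420], [40, 430]])
dst = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
H = cv2.getPerspectiveTransform(src, dst)

def camera_to_projector(x, y):
    """Map a tracked position from camera space into projector space."""
    pt = cv2.perspectiveTransform(np.float32([[[x, y]]]), H)
    return tuple(pt[0, 0])  # (x, y) in projector pixels
```

With that in place, a ripple effect can be drawn at the returned projector coordinates for each tracked person.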
I have achieved the kind of effect you want by using a camera mounted overhead, close to the projector. A black-and-white CCTV camera with an infrared pass filter will see anything lit with infrared light, but will not see the projected image. I lit my actors with IR focused off the floor and used the Eyes actor in Isadora to track the resulting blobs.
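For anyone curious what that blob-tracking step amounts to outside Isadora, here is a rough Python/OpenCV equivalent (the threshold and minimum area are illustrative guesses):

```python
import cv2

def track_blobs(frame, threshold=128, min_area=200):
    """Find bright blobs in a grayscale frame and return their centres,
    similar in spirit to the Eyes actor's spot tracking."""
    _, bw = cv2.threshold(frame, threshold, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw)
    # Label 0 is the background; keep only blobs big enough to be a person.
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```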
You will need a lot of headroom: the projector and camera should be at least 5 m from the floor. (It's possible to use super wide lenses if things are rigged lower, but then the distortion of the image starts to cause problems.)
-
Regarding the depth sensor restrictions: you are right about the body recognition function in OpenNI.
But as I mentioned, that is 'only' an extra feature on top of the raw depth readings. Placed above the stage and using only the grayscale depth picture, you can 'cut out' everything below a defined level above the stage. Everything below, say, 1 metre above stage level would be shown as black, and everything higher would become a gradient from grey to white, depending on the value settings. This gives you a black-and-white image similar to the IR camera image, which Eyes++ can then process in the same way. In fact most depth sensors, the Kinect included, are based on an IR point cloud system. This is good to know, because intense IR lighting can corrupt the depth sensor data, and therefore the body tracking as well.
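To make the 'cut out' step concrete, here is a small sketch of that remap in Python with numpy (the sensor height and cutoff are assumed example values in millimetres):

```python
import numpy as np

def height_to_gray(depth_mm, sensor_mm=4000, cutoff_mm=1000):
    """Remap an overhead depth image so everything below the cutoff height
    above the floor goes black, and everything above fades grey to white."""
    height = sensor_mm - depth_mm.astype(np.int32)  # height above the floor
    gray = np.clip((height - cutoff_mm) / float(sensor_mm - cutoff_mm), 0, 1)
    return (gray * 255).astype(np.uint8)
```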
A downside of depth sensors is their mostly restricted viewing angles. As you write yourself, the same can be said of CCTV cameras, but those are more flexible, since one can choose cameras with various fixed, interchangeable, or zoom lenses. On the other hand, getting the environment right for pure IR tracking can be tricky.
-
@dillthekraut Yes, I have used Kinects both ways (frontal to get the skeleton, and overhead to get clean zenithal tracking), so I don't need a visible-light cutoff filter in front of the camera (the Kinect has one built in), I don't need to add infrared light, and therefore I don't need background subtraction.
Depth cameras still rock