Using Kinect as Mask Data
Hi All,
I'm new to Izzy, but loving the flexibility! I originally got interested in doing a background subtraction for an installation, which worked quite well. Unfortunately, the brief has since changed: the background is no longer static, so a subtraction doesn't work. (Just to add: I have no control over the background lighting or content at all.)

So I've started investigating using Kinect data as the mask source. Do any of you know:
- is it easy to do?
- is it reliable?
- is it possible to limit the "depth of field" of the Kinect so that someone standing or moving a few meters behind the subject can be ignored?

Looking forward to hearing your thoughts,
Henri
- yes, it's easy, if you have NImate.
- pretty much. NImate syphoned into Isadora is usually stable until someone unplugs the Kinect; then everything crashes.
- the depth of field of the Kinect is pretty limited anyway (1.5m - 5.5m ish, depending on position). NImate has a screen where you can enter the coordinates of the sensor area (the idea boils down to a depth threshold; see the sketch below). it's possible to do this with other NI software, but NImate is, by far, the simplest solution i've come across.

there will be latency. not much, but some. the ghost image out of NImate is a bit blobby, so you won't get a perfect mask; you'd need a much more complex and expensive IR setup to get a perfect mask, and that would need control over lighting and background.

also: the Kinect is pretty wide-angle and needs to be close to the subject. this will give you a cone of sensor area. as the subject moves closer to the Kinect, their mask will get bigger and start to cut off at the edges quite quickly. this solution will work best if the subject is moving on one plane.

hope this is useful,
john
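if you want to see what the depth-limiting boils down to, it's essentially a band-pass threshold on the depth image. here's a minimal Python sketch of that idea (not what NImate does internally, just the underlying technique; the commented acquisition lines are hypothetical and assume something like the libfreenect bindings hands you the depth frame as a numpy array of millimeter values):

```python
import numpy as np

def depth_band_mask(depth_mm, near_mm=1500, far_mm=2500):
    """Return an 8-bit mask: 255 where the depth reading falls
    inside [near_mm, far_mm], 0 everywhere else. Zero-valued
    'no reading' pixels are rejected by the lower bound."""
    in_band = (depth_mm >= near_mm) & (depth_mm <= far_mm)
    return in_band.astype(np.uint8) * 255

# hypothetical acquisition -- any source that yields a numpy depth
# array will do; raw Kinect values may need converting to mm first:
# import freenect
# depth, _ = freenect.sync_get_depth(format=freenect.DEPTH_MM)
# mask = depth_band_mask(depth, near_mm=1500, far_mm=2500)
```

with the near/far band set around the subject's plane, someone moving a few meters behind simply falls outside the band and never enters the mask.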
Cheers John,
That's very useful info! I will look into NImate right now.

We are not looking for a perfect mask (if anything, I will add more Gaussian blur to the incoming Syphon feed), so it does feel like this could be the perfect solution.

Thanks for your valuable input!
Henri
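for completeness: in Henri's setup the blur would happen inside Isadora, on the incoming Syphon stream, but the operation itself is simple. a quick Python/OpenCV sketch with a placeholder mask frame (the frame and sizes are made up for illustration):

```python
import cv2
import numpy as np

# placeholder mask -- in practice this arrives over Syphon
mask = np.zeros((480, 640), dtype=np.uint8)
mask[150:350, 250:400] = 255  # stand-in for the blobby subject silhouette

# soften the hard edges; the kernel size must be odd, and larger
# kernels give a softer falloff
soft_mask = cv2.GaussianBlur(mask, (31, 31), 0)
```

a bigger kernel hides more of the blobbiness John mentions, at the cost of the mask bleeding further past the subject's edges.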