Looking for examples of what you can do with the Kinect or other Depth Cameras
mark_m last edited by
I'm working on a project where we've used the Kinect to do simple blob tracking. The artist I'm working with is very keen to see what else the Kinect is capable of.
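For anyone curious what "simple blob tracking" on a depth image can look like, here is a minimal sketch (not the poster's actual setup): threshold the depth values to a near-field mask and take the centroid of that mask as the blob position. The depth range used here is an assumption; the Kinect reports depth in millimetres.

```python
import numpy as np

# Illustrative sketch only: threshold a depth frame and find one blob's
# centroid. A real patch (e.g. in Isadora) would do this per frame and
# track multiple blobs.

def blob_centroid(depth, near=500, far=1500):
    """Centroid (x, y) of pixels whose depth falls in (near, far) mm."""
    mask = (depth > near) & (depth < far)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return (float(xs.mean()), float(ys.mean()))

# Toy 4x4 depth frame: background at 3 m, a small "person" at 1 m.
depth = np.full((4, 4), 3000)
depth[1:3, 1:3] = 1000
print(blob_centroid(depth))  # → (1.5, 1.5)
```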
Is anyone able to share any work that they've made using the Kinect (or other depth cameras)?
I'm also quite keen to see what people have done with the skeleton tracking possibilities in the OpenNI tracker / Skeleton Tracker / Skeleton Visualiser.

Thanks a lot
Mark in chilly London town.
In one project I calculated the position of each user in the space as they passed through (it was a controlled entry) and used this position data to direct many eyes (projected onto the hallway walls) to look directly at them. Sorry, no video.
This ability to triangulate a position relative to other positions can be rather interesting - sort of making objects aware of what's being tracked.
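The "eyes follow the visitor" idea above can be sketched in a few lines: given a tracked position (say, a blob centroid from the Kinect) and a list of fixed eye positions on the wall, compute the angle each eye should rotate to in order to face the visitor. The names and coordinates here are purely illustrative, not from the original project.

```python
import math

def gaze_angles(user_xy, eye_positions):
    """Rotation (degrees) for each eye so it faces user_xy."""
    ux, uy = user_xy
    angles = []
    for ex, ey in eye_positions:
        # atan2 gives the angle of the vector from the eye to the user
        angles.append(math.degrees(math.atan2(uy - ey, ux - ex)))
    return angles

# Three eyes spaced along a wall; visitor tracked at (2.0, 1.0).
eyes = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(gaze_angles((2.0, 1.0), eyes))
```

Each angle could then drive the rotation of a projected eye graphic, so every object "knows" where the tracked person is.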
bonemap last edited by bonemap
Here are a couple of tests composited completely in Isadora with the OpenNI plugin. I also wrote some reflections on using OpenNI here:
dbini last edited by
Hi Mark, I usually use the depth image from the Kinect to create effects like in this one:
I have also used skeleton data to send MIDI to Ableton Live so that the sound environment responds to bodies moving in space. Sorry, I don't have any examples of that in action...
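A minimal sketch of the skeleton-to-MIDI idea, in the spirit of driving Ableton Live from bodies in space: map one joint coordinate to a MIDI control-change message. The joint ranges here are assumptions; a real patch would receive joints via OSC from an OpenNI tracker and send the bytes with a MIDI library such as mido.

```python
def joint_to_cc(value, lo, hi, cc=1, channel=0):
    """Map a joint coordinate in [lo, hi] to a MIDI control-change message."""
    clamped = max(lo, min(hi, value))
    scaled = int(round((clamped - lo) / (hi - lo) * 127))
    # Status byte 0xB0 | channel = control change on that channel
    return (0xB0 | channel, cc, scaled)

# Right-hand height of 1.2 m in an assumed 0-2 m range → CC1 value 76
print(joint_to_cc(1.2, 0.0, 2.0))  # → (176, 1, 76)
```

In Ableton, the CC can then be MIDI-mapped to any device parameter, so raising a hand sweeps a filter, for example.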
artoo last edited by
Hi, there are some pretty easy things to do.
Here is an example using blob tracking with some effects applied to the body,
and another adding skeleton tracking, where the blobs' heads are replaced by drawings of monsters' heads...
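A hedged sketch of the monster-head overlay described above: given the tracked head position from the skeleton data and the size of a drawing, compute the top-left corner at which to composite the drawing so it sits centred on the head. This is purely illustrative; inside Isadora the same thing would be done by positioning a Projector from the tracker's head output.

```python
def overlay_origin(head_xy, overlay_wh):
    """Top-left pixel at which to paste an overlay centred on head_xy."""
    hx, hy = head_xy
    w, h = overlay_wh
    # Shift by half the overlay size so its centre lands on the head
    return (hx - w // 2, hy - h // 2)

# Head tracked at (320, 120); monster drawing is 100x80 pixels.
print(overlay_origin((320, 120), (100, 80)))  # → (270, 80)
```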