Multiple Kinects Best Practices
RWillats last edited by
I'm in the early staging process for a performance and have discovered that one Kinect won't cover the entire width of the performance space. I have access to a couple more Kinects and some Mac minis, but am wondering: does anyone have tips on pulling the data from three Kinects? Should I be trying to get one device to read the data from all three, or sending the data between separate devices? I don't have experience with either approach and would appreciate any guidance.
I could be wrong, but as far as I understand it, each Kinect needs to gather its own data, because the depth camera works by bouncing IR light from the Kinect onto an object and back into the Kinect's sensors - an IR-light version of echolocation and triangulation (but with light instead of sound). It's very important that the IR projections of the Kinects have little to no overlap, because overlapping patterns interfere with and confuse each other. The more overlap they have, the muddier your data will be.
Better, science-y explanation here for people with larger attention spans and bigger brains than mine: https://www.scitepress.org/papers/2014/47364/47364.pdf
The way I've heard of people overcoming this is by attaching a motor with an off-center weight to each Kinect and having each motor make each Kinect oscillate at a different frequency: https://www.researchgate.net/publication/261283070_Reducing_Interference_Between_Multiple_Structured_Light_Depth_Sensors_Using_Motion
does anyone have tips on pulling the data from three Kinects
The way I have seen this done in the past is by 'stitching' the point clouds together, which is not something that is currently possible in Isadora. This thread has an interesting discussion of the same problem, with a few approaches offered (using C++ in openFrameworks).
dbini last edited by
It depends on what data you need to get from your three Kinects. DusX, I believe, is referring to volumetric capture: making a point-cloud image of a person by combining the depth images from three cameras, thereby avoiding the dark-side problem of using a single Kinect. Yes, that requires some pretty heavy logic to get working, and it is likely beyond Isadora's capabilities.
I think Rory is looking at covering a larger stage area, and this is entirely within the capacity of Isadora. It's important to factor in the nature of a camera: whether it's a 2D or 3D camera, the image it receives is a cone shape. Kinects are designed to look for human forms walking on a particular plane - the floor - so essentially each Kinect sees a triangular wedge of the stage. If you arrange three Kinects across the front of the stage, you will get blind spots between those wedges; three Kinects in a cluster will give you a massive angle, but no extra depth.
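To get a feel for how wide each wedge actually is, you can work it out from the sensor's horizontal field of view. A minimal sketch, assuming the original Kinect's published horizontal FOV of roughly 57 degrees (substitute your own sensor's spec):

```python
import math

def coverage_width(distance_m, h_fov_deg=57.0):
    """Width of the wedge a depth camera sees at a given distance.

    57 degrees is the published horizontal field of view of the
    original Kinect; adjust for your sensor model.
    """
    return 2.0 * distance_m * math.tan(math.radians(h_fov_deg) / 2.0)

# At 3 m from the front edge of the sensing range, one Kinect covers roughly:
print(round(coverage_width(3.0), 2))  # 3.26 (metres)
```

So at 3 m a single sensor covers a bit over 3 m of stage width, which is why a wide stage quickly pushes you toward multiple Kinects with carefully planned (and ideally non-overlapping) wedges.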
Processing the data is going to involve some maths but, depending on what data you're tracking and how it affects your environment, it can be pretty straightforward and logical. I would suggest running a separate device for each Kinect and networking them together, bringing just the data you need into one machine for processing. The OpenNI Tracker can receive loads of data from the Kinect and you likely won't need all of it, so filter out the data you want and send it via OSC or the Broadcaster/Listener system to your main processing machine.
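For anyone curious what actually travels over the wire with the OSC approach, here is a sketch of the OSC 1.0 message encoding in plain Python. The address pattern is hypothetical - name yours to match whatever your OSC listener on the main machine expects - and in practice Isadora's OSC Transmit actor (or a library such as python-osc) does this encoding for you:

```python
import struct

def _pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, *args: float) -> bytes:
    """Encode a minimal OSC message carrying float32 arguments.

    The address (e.g. "/kinect/1/user/1/x") is a made-up example;
    choose a scheme that tells the receiver which sensor sent it.
    """
    msg = _pad(address.encode("ascii"))          # address pattern
    msg += _pad(("," + "f" * len(args)).encode("ascii"))  # type tag string
    for a in args:
        msg += struct.pack(">f", a)              # big-endian float32
    return msg

# One UDP datagram per update, carrying just the joint value you filtered out:
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(osc_message("/kinect/1/user/1/x", 0.42), ("192.168.0.10", 1234))
```

The point of the sketch is that each message is tiny - a few dozen bytes - so sending only the filtered joint data over the network, rather than depth images, is cheap.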
RWillats last edited by
All of these responses have been very helpful, thanks!
In this use case, I'm lucky I don't have to worry too much about overlap issues. It sounds like a separate device for each Kinect and then using OSC to get the OpenNI data between the devices might be my best bet. I gather there isn't a way to have one device read the outputs of three Kinects. Does this mean I'll need three Isadora licenses to get the OpenNI-Skeleton tracker from each Kinect?
I gather there isn't a way to have one device read the outputs of three Kinects.
Active USB extension cables would let you connect multiple Kinects to the same computer (which you'd place onstage or backstage, then use OSC to send the relevant data to the show computer in the control booth or wherever). I'd advise you to avoid putting multiple Kinects on the same USB hub if you can help it.
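Whichever topology you use, the "maths" dbini mentioned is mostly about translating each Kinect's local coordinates into one shared stage frame on the receiving machine. A minimal sketch, assuming each sensor faces the stage square-on (a rotated sensor would also need a rotation, not just an offset) and with made-up offset values:

```python
def to_stage_x(local_x_m, sensor_offset_m):
    """Map a Kinect's local left/right position into shared stage coordinates.

    sensor_offset_m is where that Kinect's optical axis crosses the stage's
    x axis - something you'd measure during setup. Values here are invented.
    """
    return sensor_offset_m + local_x_m

# Example: three Kinects spread across a 9 m stage, centred at 1.5, 4.5, 7.5 m.
OFFSETS = {1: 1.5, 2: 4.5, 3: 7.5}

def handle(sensor_id, local_x_m):
    """What the show computer does with each incoming (sensor, x) update."""
    return to_stage_x(local_x_m, OFFSETS[sensor_id])

# A performer 0.8 m to the left of Kinect 2 is at about 3.7 m along the stage:
print(round(handle(2, -0.8), 2))  # 3.7
```

With per-sensor offsets like this, all three Kinects report positions in the same stage coordinates, so the rest of the patch doesn't need to care which sensor saw the performer.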
Does this mean I'll need three Isadora licenses to get the OpenNI-Skeleton tracker from each Kinect?
Any Isadora file can be run without a license in demo mode. In demo mode every feature in Isadora is fully enabled except saving, so you'd just build the patches on a computer with a license before deploying them. That means you could have three computers, each with one Kinect, all sending data somewhere via OSC, without any Isadora licenses on them. Even if you scaled up to 100 computers and 100 Kinects, you could create the Isadora file you need on one licensed computer, copy it to each of the 100 machines, and run it on all of them without a license on any of them.
That being said, it'll be easier to have a license on each computer while you're still building and adjusting the files here and there; then, for the performance(s), when you don't have any changes to make or save, you can just run all three computers with Isadora in demo mode. There are a couple of options for this:
- Get three Isadora 7-Day Rental licenses so that each of the three computers could save for a week while you're building.
- If you need more than a week, you could get three Isadora Monthly Subscriptions and then cancel the subscriptions. This gives you Isadora on the three computers for a full month (and because you cancelled, you won't be charged again for the next month).
I'd advise you to avoid putting multiple Kinects on the same USB hub
I just want to reinforce this: generally, one Kinect per USB host/bus in your system. On a laptop you will often have one bus on the left side and one on the right (or maybe the rear), but it's unlikely you have more than two. This matters most with the Kinect V2, which is USB 3 only and really needs the bandwidth.