Kinect for background subtraction
-
Or perhaps use a blend more directly in Processing? I've tried it but can't seem to make it work. Sorry, I'm really new to Processing, so I haven't figured out the logic yet.
Thanks! -
If anyone would like to try/test this:
https://github.com/rwebber/kinect2share
Please do and let me know how it goes. -
Thanks Ryan. I am going to ask the dumb question here, but when I download something from GitHub it comes as a zip. When I unzip it, it's always just lots of files and code-like documents. Am I doing something wrong, or should this be an application ".app"?
-
The 'release' zip has the app 'exe'. This is Windows only.
The izzy sample file is in the code files... I need to update that... I forgot to add it to the release. -
That's OK. Just wanted to check.
-
Hi there,
do you know if there is an alternative for Mac to Daniel Shiffman's library in Processing?
While testing the file from [@jhoepffner](http://troikatronix.com/troikatronixforum/profile/jhoepffner), I got an error: Processing cannot find the device (Kinect v2).
It seems like the Mac and Kinect v2 do not work well together. Any suggestions on how to solve this? Thanks -
Hello,
If you follow the procedure for installing the libraries (as described in Shiffman's post, mainly freenect), it will work well. Some things to note:
– you need a USB3 connection
– sometimes you need to unplug/replug the Kinect
– Kinect v2 on Mac gives you the raw depth info, the color image, and the IR image, but no skeleton info.
I used it in an installation one month ago, Processing to Isadora to Processing, no problem.
https://vimeo.com/194632092
I do not know of any other working Kinect Processing library on Mac. Millumin is able to get skeleton info on Mac, but there is no way to output the info to Isadora…
Hope that helps -
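A side note on the raw depth mentioned above: on the Kinect v2 each raw depth reading is a distance in millimetres, so turning the raw frame into a viewable grayscale image is just a linear map. A rough plain-Java sketch of that mapping (the helper name `depthToGray` and the 500–4500 mm range are my own illustration, not part of any Kinect library):

```java
// Map a raw Kinect v2 depth reading (millimetres) to an 8-bit
// grayscale value: near = bright, far or out of range = black.
public class DepthToGray {
    static int depthToGray(int mm, int minMm, int maxMm) {
        if (mm < minMm || mm > maxMm) return 0;            // out of range -> black
        // linear map: minMm -> 255 (near, bright), maxMm -> 0 (far, dark)
        return 255 - (mm - minMm) * 255 / (maxMm - minMm);
    }

    public static void main(String[] args) {
        System.out.println(depthToGray(500, 500, 4500));   // nearest -> 255
        System.out.println(depthToGray(4500, 500, 4500));  // farthest -> 0
        System.out.println(depthToGray(0, 500, 4500));     // below range -> 0
    }
}
```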
I have made a few updates to this app:
https://github.com/rwebber/kinect2share
It now outputs the hand state (is the right or left hand open or closed) as well as a number of other small things.
The black and white mask (silhouette) is available via Spout by default; the 'green screen' type feed is there as well, in case you want live background subtraction. -
@jhoepffner the problem I got with Shiffman's library does happen; I don't know why or how, and it has developed into an unsolvable mystery:
https://github.com/shiffman/OpenKinect-for-Processing/issues/80
A couple of questions below; if you have the time, I would really appreciate your feedback.
I will be using Windows instead, with the other two libraries. Can I ask you, will your Processing patch work with the other Processing libraries?
Also, I will be compositing the final video in Resolume, so I was thinking of sending the image from Isadora to Resolume via Syphon. Is it possible to keep the alpha values intact when importing the Syphon image into Resolume?
A huge thanks for your help. -
@DusX thanks for sharing. As I'm totally new to Windows, I'll ask you something a bit dumb: what do I have to do to run this app?
-
– First question: I am more accustomed to macOS, but I use Windows via Boot Camp, and I have tried the Kinect on Windows and Mac, with Processing and TouchDesigner. From what I have seen, it is much better implemented on Windows, and with the Windows Processing Kinect library you can directly obtain a mask for background replacement. But you have to struggle with Spout, which is less easy to use than Syphon (DirectX dependent, graphics card dependent…). For my actual usage (https://vimeo.com/194632092) I use Processing and Isadora on macOS, with Syphon to pass images both ways.
– To use my patch with another library, you would have to change everything related to the library, and I don't know how to obtain "getRawDepth" there.
– For the second question: if you use Windows, you don't use Syphon but Spout… Concerning Syphon, if you declare your background in Processing with an alpha mask (as color(0, 0, 0, 0)), you send alpha with Syphon, but I don't know about Resolume and Spout… In Millumin you have a quite easy way to obtain an alpha mask from the Kinect; give the beta release a try.
Jacques -
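For what it's worth, the reason the color(0, 0, 0, 0) trick carries alpha through Syphon is that Processing stores each pixel as a 32-bit ARGB integer, with alpha in the top byte, so a fully transparent background pixel is literally the value 0. A small plain-Java illustration (the `argb` and `alpha` helpers are hypothetical, mirroring how Processing packs color(r, g, b, a)):

```java
public class Argb {
    // Pack r, g, b, a (each 0-255) into one 32-bit ARGB int,
    // the same layout Processing's color(r, g, b, a) uses.
    static int argb(int r, int g, int b, int a) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    // Extract the alpha byte from a packed ARGB int.
    static int alpha(int c) {
        return (c >>> 24) & 0xFF;
    }

    public static void main(String[] args) {
        int transparent = argb(0, 0, 0, 0);       // color(0, 0, 0, 0)
        int opaqueWhite = argb(255, 255, 255, 255);
        System.out.println(alpha(transparent));   // 0
        System.out.println(alpha(opaqueWhite));   // 255
    }
}
```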
@Eratatat Download the release (unzip it) and open the exe/application file. All required libraries are included.
Note: both Syphon and Spout support transfer of video with alpha. This image shows the alpha cutout video inside the openFrameworks app and inside Isadora: https://camo.githubusercontent.com/bfeb9fa759cc5f984491d0daecd84cd878211521/68747470733a2f2f73636f6e74656e742d79797a312d312e78782e666263646e2e6e65742f762f74312e302d392f31343532303535355f3632383434313634373332363639395f323030393038363034333338343439333735375f6e2e706e673f6f683d3133626138383034353261333531393363386264666163646161643639363365266f653d3538414333354633
-
@DusX
I'm sorry, I can't find the release zip. I downloaded the zip from GitHub, but there is nothing there.
Is there a way to calibrate which part of the depth data we take? I'd like to do quite a detailed background subtraction.
Many thanks
-
OK, found the release section. Sorry! I don't really know how to use GitHub.
-
I ran the app, and it says it can't run because MSVCP140.dll is not installed.
-
This is part of the Microsoft Visual C++ Redistributable:
https://www.microsoft.com/en-ca/download/details.aspx?id=48145 -
Thanks guys.
First of all @DusX, your app is a great tool for simple background subtraction.
Unfortunately, I'm doing something very specific: filming the person top-down, and they're lying on the floor. It seems like the BodyTrackImage cannot work in this setting, as the person is very close to the floor. Is there a way to set the Kinect up so that it scans only a segment of the z-axis?
@[jhoepffner](http://troikatronix.com/troikatronixforum/profile/15/jhoepffner), I tried to implement your example using Lengeling's library and Spout, but can't seem to make it work: depthWidth cannot be resolved. Any ideas?
Thanks
ET -
```java
import spout.*;
import KinectPV2.*;

KinectPV2 kinect;
PGraphics canvas;   // full-color feed
PGraphics canvas2;  // depth mask
Spout spout;
Spout spout2;

int thresholdH = 1200; // max distance (in mm)
int thresholdL = 0;    // min distance (in mm)

void setup() {
  size(640, 360, P3D);
  textureMode(NORMAL);
  kinect = new KinectPV2(this);
  kinect.enableDepthImg(true);
  kinect.enableBodyTrackImg(true);
  kinect.enableColorImg(true);
  kinect.init();
  canvas = createGraphics(1280, 720, P3D);
  // the mask canvas must match the depth frame (512 x 424 on the Kinect v2)
  canvas2 = createGraphics(KinectPV2.WIDTHDepth, KinectPV2.HEIGHTDepth, P3D);
  spout = new Spout(this);
  spout2 = new Spout(this);
  spout.createSender("rgb image");
  spout2.createSender("mask");
}

void draw() {
  background(0, 90, 100);
  noStroke();

  // send the color image
  canvas.beginDraw();
  canvas.image(kinect.getColorImage(), 0, 0, canvas.width, canvas.height);
  canvas.endDraw();
  spout.sendTexture(canvas);

  // read the raw Kinect depth (one int per pixel, in mm)
  int[] rawData = kinect.getRawDepthData();

  // draw the depth mask in white between the low and high limits;
  // use the library's KinectPV2.WIDTHDepth / HEIGHTDepth constants
  // (there is no kinect.depthWidth field, hence the "cannot be resolved" error)
  canvas2.beginDraw();
  canvas2.loadPixels();
  for (int x = 0; x < KinectPV2.WIDTHDepth; x++) {
    for (int y = 0; y < KinectPV2.HEIGHTDepth; y++) {
      int offset = x + y * KinectPV2.WIDTHDepth;
      int rawDepth = rawData[offset];
      if (rawDepth > thresholdL && rawDepth < thresholdH) {
        canvas2.pixels[offset] = color(255, 255, 255, 255); // opaque white inside limits
      } else {
        canvas2.pixels[offset] = color(0, 0, 0, 0); // transparent outside limits
      }
    }
  }
  canvas2.updatePixels();
  canvas2.endDraw();
  spout2.sendTexture(canvas2);
}
```
-
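The per-pixel thresholding at the heart of the sketch above does not depend on Processing or the Kinect library at all. Isolated as plain Java (the helper name `depthMask` is made up for illustration), it is just:

```java
// Build a binary mask from raw depth values (in mm):
// pixels strictly inside (low, high) become opaque white (0xFFFFFFFF),
// everything else becomes fully transparent (0x00000000).
public class DepthMask {
    static int[] depthMask(int[] rawDepth, int low, int high) {
        int[] mask = new int[rawDepth.length];
        for (int i = 0; i < rawDepth.length; i++) {
            boolean inside = rawDepth[i] > low && rawDepth[i] < high;
            mask[i] = inside ? 0xFFFFFFFF : 0x00000000;
        }
        return mask;
    }

    public static void main(String[] args) {
        // with low = 0 and high = 1200 (the sketch's defaults):
        int[] raw = {0, 500, 1199, 1200, 4500};
        for (int v : depthMask(raw, 0, 1200)) {
            System.out.println(Integer.toHexString(v)); // 0, ffffffff, ffffffff, 0, 0
        }
    }
}
```

Narrowing the band (raising thresholdL, lowering thresholdH) is exactly the "scan only a segment of the z-axis" calibration asked about earlier in the thread.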
The app can easily be extended to supply the depth image. Perhaps using levels and then a luma key could provide something useful...