Kinect for background subtraction
-
@Eratatat Download the release (unzip it) and open the exe/application file. All required libraries are included. Note: both Syphon and Spout support transfer of video with alpha. This image shows the alpha cutout video inside the openFrameworks app and inside Isadora. https://camo.githubusercontent.com/bfeb9fa759cc5f984491d0daecd84cd878211521/68747470733a2f2f73636f6e74656e742d79797a312d312e78782e666263646e2e6e65742f762f74312e302d392f31343532303535355f3632383434313634373332363639395f323030393038363034333338343439333735375f6e2e706e673f6f683d3133626138383034353261333531393363386264666163646161643639363365266f653d3538414333354633
-
@DusX
I'm sorry, I can't find the release zip. I downloaded the zip folder from GitHub, but there's nothing there.
Is there a way to calibrate which slice of the depth data we take? I'd like to do quite a detailed background subtraction.
Many thanks
-
Ok, found the release section. Sorry! I don't really know how to use GitHub.
-
Ran the app, but it says it can't run because MSVCP140.dll is not installed.
-
This is part of the Microsoft Visual C++ Redistributable:
https://www.microsoft.com/en-ca/download/details.aspx?id=48145
-
Thanks guys.
First of all @DusX, your app is a great tool for simple background subtraction. Unfortunately, I'm doing something very specific: filming the person top down while they're lying on the floor. It seems like the BodyTrackImage cannot work in this setting, as the person is very close to the floor. Is there a way to set the Kinect up so that it scans only a segment of the z axis?
@[jhoepffner](http://troikatronix.com/troikatronixforum/profile/15/jhoepffner), I tried to implement your example using Lengeling's library and Spout, but can't seem to make it work: depthWidth cannot be resolved. Any ideas?
Thanks
ET
-
import spout.*;
import KinectPV2.*;

KinectPV2 kinect;
Spout spout;
Spout spout2;
PGraphics canvas;  // RGB image sent to Isadora
PGraphics canvas2; // black/white depth mask

int thresholdH = 1200; // max distance (in mm)
int thresholdL = 0;    // min distance (in mm)

void setup() {
  size(640, 360, P3D);
  textureMode(NORMAL);
  kinect = new KinectPV2(this);
  kinect.enableDepthImg(true);
  kinect.enableBodyTrackImg(true);
  kinect.enableColorImg(true);
  kinect.init();
  canvas = createGraphics(1280, 720, P3D);
  // the mask canvas must match the depth resolution (512 x 424);
  // KinectPV2 exposes it as constants, not as kinect.depthWidth
  canvas2 = createGraphics(KinectPV2.WIDTHDepth, KinectPV2.HEIGHTDepth, P3D);
  spout = new Spout(this);
  spout2 = new Spout(this);
  spout.createSender("rgb image");
  spout2.createSender("mask");
}

void draw() {
  background(0, 90, 100);
  noStroke();
  canvas.beginDraw();
  canvas.image(kinect.getColorImage(), 0, 0, canvas.width, canvas.height);
  canvas.endDraw();
  spout.sendTexture(canvas);
  int[] rawData = kinect.getRawDepthData(); // read kinect depth
  canvas2.beginDraw();
  canvas2.loadPixels();
  // draw the depth image in white between the high and low limits
  for (int x = 0; x < KinectPV2.WIDTHDepth; x++) {
    for (int y = 0; y < KinectPV2.HEIGHTDepth; y++) {
      int offset = x + y * KinectPV2.WIDTHDepth;
      int rawDepth = rawData[offset];
      if (rawDepth > thresholdL && rawDepth < thresholdH) {
        canvas2.pixels[offset] = color(255, 255, 255, 255); // white inside limits
      } else {
        canvas2.pixels[offset] = color(0, 0, 0, 0); // transparent outside limits
      }
    }
  }
  canvas2.updatePixels();
  canvas2.endDraw();
  spout2.sendTexture(canvas2);
}
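For the top-down setup, those two thresholds are what slice the z axis: with the Kinect overhead, thresholdH should sit just short of the floor distance and thresholdL a little above the performer, so everything outside that band drops out of the mask. As a rough illustration only (the distances below are placeholders, not measured values; measure your own mounting height):

int thresholdL = 2100; // hypothetical: ~0.5 m above the floor, for a ceiling mount ~2.6 m up
int thresholdH = 2550; // hypothetical: just short of the floor plane

-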
The app can easily be changed/added to, to supply the depth image. Perhaps running that through Levels and then a Luma Key could provide something useful...
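If you'd rather try that route in Processing, a minimal sketch along the same lines (assuming the KinectPV2 and Spout libraries used above; the sender name is my own choice) would send the grayscale depth image out as a texture, ready for a Levels/Luma Key chain on the receiving side:

import spout.*;
import KinectPV2.*;

KinectPV2 kinect;
Spout spout;
PGraphics canvas;

void setup() {
  size(512, 424, P3D);
  kinect = new KinectPV2(this);
  kinect.enableDepthImg(true);
  kinect.init();
  canvas = createGraphics(KinectPV2.WIDTHDepth, KinectPV2.HEIGHTDepth, P3D);
  spout = new Spout(this);
  spout.createSender("depth image"); // name is arbitrary; pick it up in Isadora's Spout receiver
}

void draw() {
  canvas.beginDraw();
  canvas.image(kinect.getDepthImage(), 0, 0); // grayscale depth frame from KinectPV2
  canvas.endDraw();
  spout.sendTexture(canvas);
  image(canvas, 0, 0); // local preview
}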
-
Hi
This was a Processing app I made a few years back. It created a black-and-white mask from the Kinect depth data and sent it out via Syphon. I have not opened it for a while, but it worked fine, and maybe there is something useful in there.
@[eratatat](http://troikatronix.com/troikatronixforum/profile/6259/eratatat) there is a threshold value for the mask in the attached code, so you can define where the mask starts and ends on the z axis. I can't say if it is sensitive enough for somebody lying on the floor. I remember mapping the threshold to OSC so I could tweak it from Isadora, i.e. move it back and forth.
Hope it helps you.
Yours,
fubbi
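A minimal sketch of that OSC mapping, assuming the oscP5 library, a listening port of 12000, and an address of /kinect/threshold (all three are placeholders; match them to whatever your Isadora patch actually sends):

import oscP5.*;

OscP5 oscP5;
int thresholdH = 1200; // far edge of the depth slice, in mm, overwritten via OSC

void setup() {
  size(512, 424);
  // listen for OSC from Isadora; the port must match the transmit setup there
  oscP5 = new OscP5(this, 12000);
}

// called by oscP5 whenever a message arrives
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/kinect/threshold")) {
    thresholdH = int(msg.get(0).floatValue()); // move the mask's far edge back and forth
  }
}

void draw() {
  background(0);
  text("thresholdH: " + thresholdH + " mm", 10, 20);
}

-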
I am pretty sure the depth threshold is independent of the OpenNI or skeleton tracking that recognises humans and makes the mask. There are a few threads about this on the Microsoft forums. You have a better chance at tracking if you increase the contrast behind the person (like a matte black background that also absorbs infrared light). The skeleton tracking that produces the mask uses all the images together to find people.
Also, too obvious probably, but make sure the camera is oriented so that even lying down you appear to be standing the right way up in the image; the systems have a very hard time if you are sideways or upside down.
Fred
-
Hi!
Thanks a lot for your help, I really appreciate it!
@fubbi, I will check it out; this is exactly what I wanted to do. The thing is that I'm very new to Processing and have very little time ahead of me to make this work.
I've had the amazing contribution of DusX with his cool little app; he also played with the contrast internally so that you can get more detail.
With my little experience, I still intuitively feel that minimising the scan distance will help the Kinect get the right information. Worst case scenario, we're even considering having her move on a rotating stool, so that she's a bit detached from the floor.
Will let you know how this develops. It's been quite a saga, but with all you guys helping out, it feels like a solution is close!
Thanks