Kinect for background subtraction

  • @DusX
I'm sorry, I can't find the release zip. I downloaded the zip from GitHub, but there's nothing there.
    Is there a way to calibrate which part of the depth data we use? I'd like to do quite a detailed background subtraction.
    Many thanks

Ok, found the Releases section. Sorry! I don't really know how to use GitHub.

Ran the app; it says it can't run because MSVCP140.dll is not installed.

  • Tech Staff

This is part of the Microsoft Visual C++ Redistributable; installing it from Microsoft should fix the missing DLL...

  • Thanks guys.

    First of all @DusX, your app is a great tool for simple background subtraction. 
Unfortunately, I'm doing something very specific: filming the person top down while they're lying on the floor.
    It seems like the BodyTrackImage cannot work in this setting, as the person is very close to the floor.
    Is there a way to set the kinect up so that it scans only a segment of the z axis? 
@jhoepffner, I tried to implement your example using Lengeling's KinectPV2 library and Spout, but can't seem to make it work: depthWidth cannot be resolved.
    Any ideas?

  • import spout.*;
    import KinectPV2.*;
    PGraphics canvas;
    PGraphics canvas2;
    KinectPV2 kinect;
    Spout spout;
    Spout spout2;
    int thresholdH = 1200; // max distance (in mm)
    int thresholdL = 0;    // min distance (in mm)
    void setup() {
      size(640, 360, P3D);
      kinect = new KinectPV2(this);
      kinect.enableColorImg(true);
      kinect.enableDepthImg(true);
      kinect.init();
      canvas = createGraphics(1280, 720, P3D);
      // the Kinect v2 depth frame is 512 x 424, so the mask canvas matches it
      canvas2 = createGraphics(KinectPV2.WIDTHDepth, KinectPV2.HEIGHTDepth, P3D);
      spout = new Spout(this);
      spout2 = new Spout(this);
      spout.createSender("rgb image");
      spout2.createSender("depth mask");
    }
    void draw() {
      background(0, 90, 100);
      // draw the kinect color image, scaled to the canvas
      canvas.beginDraw();
      canvas.image(kinect.getColorImage(), 0, 0, canvas.width, canvas.height);
      canvas.endDraw();
      // read the kinect depth and draw white between the low and high limits
      int[] rawData = kinect.getRawDepthData();
      canvas2.beginDraw();
      canvas2.loadPixels();
      for (int x = 0; x < KinectPV2.WIDTHDepth; x++) {
        for (int y = 0; y < KinectPV2.HEIGHTDepth; y++) {
          int offset = x + y * KinectPV2.WIDTHDepth;
          int rawDepth = rawData[offset];
          if (rawDepth > thresholdL && rawDepth < thresholdH) {
            canvas2.pixels[offset] = color(255, 255, 255, 255); // white inside limits
          } else {
            canvas2.pixels[offset] = color(0, 0, 0, 0); // transparent outside limits
          }
        }
      }
      canvas2.updatePixels();
      canvas2.endDraw();
      // send both textures out over Spout
      spout.sendTexture(canvas);
      spout2.sendTexture(canvas2);
      image(canvas2, 0, 0, width, height);
    }

  • Tech Staff

The app can easily be modified to also supply the depth image. Perhaps using levels and then a luma key could provide something useful...
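    The levels-then-lumakey idea can be sketched outside Isadora as well: treat each grayscale depth pixel as a luminance value and key it to either opaque white or fully transparent. A minimal illustration in plain Java (the lumaKey name and the 0xAARRGGBB packing are just for this sketch, not from DusX's app):

    ```java
    public class LumaKey {
        // Key a grayscale pixel (0..255) against a threshold:
        // bright pixels become opaque white, dark pixels fully transparent.
        static int lumaKey(int luma, int threshold) {
            return luma > threshold ? 0xFFFFFFFF : 0x00000000;
        }

        public static void main(String[] args) {
            int[] depthRow = {12, 80, 200, 255};
            for (int luma : depthRow) {
                System.out.printf("%3d -> 0x%08X%n", luma, lumaKey(luma, 128));
            }
        }
    }
    ```

    Running a levels adjustment first simply shifts which depth values end up above the key threshold.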

  • Hi

This was a Processing app I made a few years back. It creates a black-and-white mask from the Kinect depth image and sends it out via Syphon. I have not opened it for a while, but it worked fine, and maybe there is something useful in there.
    @eratatat, there is a threshold value for the mask in the attached code, so you can define where the mask starts and ends along the z axis. I can't say if it is sensitive enough for somebody lying on the floor. I remember mapping the threshold to OSC so I could tweak it from Isadora, i.e. move it back and forth.
    Hope it helps you.
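    The OSC mapping described above boils down to rescaling an incoming control value (say 0..1 from an Isadora slider) into millimetres before it is compared against the raw depth. A minimal sketch of that rescale in plain Java (oscToMillimetres, minMm and maxMm are illustrative names, not from the original code):

    ```java
    public class DepthThreshold {
        // Rescale a normalized OSC control value (0..1) into millimetres,
        // clamping so a stray value can never push the threshold out of range.
        static int oscToMillimetres(float osc, int minMm, int maxMm) {
            float clamped = Math.max(0f, Math.min(1f, osc));
            return minMm + Math.round(clamped * (maxMm - minMm));
        }

        public static void main(String[] args) {
            // e.g. an Isadora slider at 0.25 over a 0..4500 mm Kinect v2 range
            System.out.println(oscToMillimetres(0.25f, 0, 4500)); // 1125
            System.out.println(oscToMillimetres(1.5f, 0, 4500));  // clamped to 4500
        }
    }
    ```

    In a live setup the same function would sit between the OSC receive callback and the thresholdL/thresholdH variables of the sketch.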

I am pretty sure the depth threshold is independent of the OpenNI/skeleton tracking that recognises humans and makes the mask. There are a few threads about this on the Microsoft forums. You get a better chance at tracking if you increase the contrast behind the person (e.g. a matte black background that also absorbs infrared light). The skeleton tracking that produces the mask uses all the images together to find people.

Also, probably too obvious, but make sure the orientation of the camera means that even when lying down you appear to be standing the right way up in the image; the systems have a very hard time if you are sideways or upside down.

  • Hi!

    thanks a lot for your help, I really appreciate it!
@fubbi, I will check it out; this is exactly what I wanted to do. The thing is that I'm very new to Processing and have very little time to make this work.
    I've had the amazing contribution of DusX with his cool little app; he also played with the contrast internally so that you can get more detail.
    With my limited experience, I still intuitively feel that minimising the scan distance will help the Kinect get the right information.
    Worst case scenario, we're even considering having her move on a rotating stool, so that she's a bit detached from the floor.
    Will let you know how this is developing. It's been quite a saga. 
    But with all you guys helping out, it feels like a solution is close!