Kinect for background subtraction

eratatat last edited by

Hi there,

Do you know if there is an alternative for Mac to Daniel Shiffman's library in Processing? While testing the file from [@jhoepffner](http://troikatronix.com/troikatronixforum/profile/jhoepffner) I got an error: Processing cannot find the device (Kinect v2). It seems like Mac and Kinect v2 do not work well together. Any suggestions on how to solve this?
Thanks
jhoepffner last edited by

Hello,

If you follow the library installation procedure (as described in Shiffman's post, mainly freenect), it will work well. Some things to note:
– you need a USB 3 connection
– sometimes you need to unplug/replug the Kinect
– Kinect v2 on Mac gives you raw depth info, a color image, and an IR image, but no skeleton info.
I used it in an installation one month ago, Processing to Isadora to Processing, no problem:
https://vimeo.com/194632092
I do not know of any other working Kinect Processing library on Mac. Millumin can get skeleton info on Mac, but there is no way to output that info to Isadora…
Hope that helps

Jacques Hoepffner http://hoepffner.info
GigaByte 550b / Ryzen 7 3800X / Ram 64 Go / RTX 3090 24 Go / SSD 2 To / raid0 32 To
MBP 13' i5 2.6 Ghz 16 Go / Intel Iris / macOs 10.11.6 / izzy 2.6.1 + 3.0.3b2
MBP 15' i7 2.6 Ghz 16 Go / GTX 650M 1 Go / MacOs 10.13.3 / Izzy 2.6.1
MSI GS65 i7 3.6 Ghz 32 Go / GTX 1070 8 Go / Windows 10 / Izzy 3.0.3b2

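A minimal sketch of the Mac pipeline described above, assuming Shiffman's OpenKinect-for-Processing library and the Processing Syphon library are installed; the Syphon server name is an arbitrary choice:

// Kinect v2 depth image out to Isadora (or any Syphon client) on Mac.
// Assumes the OpenKinect-for-Processing and Syphon libraries are
// installed via Processing's Contribution Manager.
import org.openkinect.processing.*;
import codeanticode.syphon.*;

Kinect2 kinect2;
SyphonServer server;
PGraphics canvas;

void setup() {
  size(512, 424, P3D);                 // Kinect v2 depth resolution; Syphon needs P3D
  kinect2 = new Kinect2(this);
  kinect2.initDepth();                 // enable the depth stream
  kinect2.initDevice();                // start the device after the init*() calls
  canvas = createGraphics(512, 424, P3D);
  server = new SyphonServer(this, "kinect-depth");  // arbitrary server name
}

void draw() {
  canvas.beginDraw();
  canvas.image(kinect2.getDepthImage(), 0, 0);  // grayscale depth frame
  canvas.endDraw();
  image(canvas, 0, 0);                 // local preview
  server.sendImage(canvas);            // publish the frame to Syphon
}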
DusX Tech Staff last edited by

I have made a few updates to this app:
https://github.com/rwebber/kinect2share
It now outputs the hand state (is the right or left hand open or closed?) as well as a number of other small things.

The black-and-white mask (silhouette) is available via Spout by default; the 'green screen' type feed is there as well, in case you want live background subtraction.

Troikatronix Technical Support

• New Support Ticket Link: https://support.troikatronix.com/support/tickets/new
• My Add-ons: https://troikatronix.com/add-ons/?u=dusx
• Professional Services: https://support.troikatronix.com/support/solutions/articles/13000109444-professional-services

Running: Win 11 64bit, i7, M.2 PCIe SSD's, 32gb DDR4, nVidia GTX 4070 | located in Ontario Canada.

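kinect2share shares its skeleton and hand-state data over OSC alongside the Spout video (an assumption based on the project's description; check its README for the exact setup). A generic oscP5 monitor like the following sketch can list the incoming addresses before you wire them into Isadora; the port is a placeholder and must match the app's settings:

// Generic OSC monitor: prints every incoming address pattern and type
// tag, useful for discovering what an app such as kinect2share sends.
import oscP5.*;

OscP5 oscP5;

void setup() {
  size(200, 200);
  oscP5 = new OscP5(this, 8000);  // placeholder port; match the sender's settings
}

void oscEvent(OscMessage msg) {
  println(msg.addrPattern() + " " + msg.typetag());  // one line per message
}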
eratatat last edited by

@jhoepffner the problem I got with Shiffman's library persists; I don't know why or how, and it has developed into an unsolvable mystery:

https://github.com/shiffman/OpenKinect-for-Processing/issues/80
A couple of questions below; if you have the time, I would really appreciate your feedback.
I will be using Windows instead, with the other two libraries. Will your Processing patch work with those libraries?
Also, I will be compositing the final video in Resolume, so I was thinking of sending the image from Isadora to Resolume via Syphon. Is it possible to keep the alpha values intact when importing the Syphon image into Resolume?
A huge thanks for your help.
eratatat last edited by

@DusX thanks for sharing. As I'm totally new to Windows, I'll ask you something a bit dumb: what do I have to do to run this app?

jhoepffner last edited by

@eratatat

– First question: I am more accustomed to macOS, but I use Windows via Boot Camp, and I have tried the Kinect on Windows and Mac, with Processing and TouchDesigner. From what I have seen it is much better implemented on Windows, and with the Windows Processing Kinect library you can directly obtain a mask for background replacement. But you have to struggle with Spout, which is less easy to use than Syphon (DirectX dependent, graphics card dependent…). For my current usage (https://vimeo.com/194632092) I use Processing and Isadora on macOS, with Syphon to pass images both ways.
– To use my patch with another library, you have to change everything that is related to the library, and I don't know how to obtain "getRawDepth" there.
– On the second question: if you use Windows, you don't use Syphon but Spout… Concerning Syphon, if you declare your background in Processing with an alpha mask (as color(0, 0, 0, 0)), you send alpha with Syphon (a minimal sketch of this follows below), but I don't know about Resolume and Spout…
In Millumin there is a quite easy way to obtain an alpha mask from the Kinect; give the beta release a try.
Jacques

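A minimal sketch of the alpha trick Jacques describes, assuming the Processing Syphon library: clear an off-screen PGraphics to a fully transparent background (color(0, 0, 0, 0)) before drawing, and the Syphon texture carries the alpha channel. The server name is arbitrary:

// Sending video with alpha over Syphon: everything not drawn on the
// canvas stays fully transparent, and the receiver gets that alpha.
import codeanticode.syphon.*;

PGraphics canvas;
SyphonServer server;

void setup() {
  size(640, 360, P3D);                    // Syphon needs the P3D renderer
  canvas = createGraphics(640, 360, P3D);
  server = new SyphonServer(this, "alpha-test");  // arbitrary server name
}

void draw() {
  canvas.beginDraw();
  canvas.background(0, 0, 0, 0);          // i.e. color(0, 0, 0, 0): transparent
  canvas.noStroke();
  canvas.fill(255, 0, 0);
  canvas.ellipse(mouseX, mouseY, 120, 120);  // opaque shape over clear alpha
  canvas.endDraw();
  image(canvas, 0, 0);
  server.sendImage(canvas);               // alpha is preserved in the texture
}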
DusX Tech Staff last edited by

@eratatat Download the release (unzip it) and open the exe/application file. All required libraries are included. Note: both Syphon and Spout support transfer of video with alpha. This image shows the alpha-cutout video inside the openFrameworks app and inside Isadora: https://camo.githubusercontent.com/bfeb9fa759cc5f984491d0daecd84cd878211521/68747470733a2f2f73636f6e74656e742d79797a312d312e78782e666263646e2e6e65742f762f74312e302d392f31343532303535355f3632383434313634373332363639395f323030393038363034333338343439333735375f6e2e706e673f6f683d3133626138383034353261333531393363386264666163646161643639363365266f653d3538414333354633

eratatat last edited by

@DusX
I'm sorry, I can't find the release zip. I downloaded the zip folder from GitHub; nothing there.
Is there a way to calibrate which part of the depth data we take? I'd like to do quite a detailed background subtraction.
Many thanks

eratatat last edited by

OK, found the release section. Sorry! I don't really know how to use GitHub.

eratatat last edited by

I ran the app; it says it can't run because MSVCP140.dll is not installed.

DusX Tech Staff last edited by

That DLL is part of the Microsoft Visual C++ Redistributable...

https://www.microsoft.com/en-ca/download/details.aspx?id=48145

eratatat last edited by

Thanks, guys.

First of all @DusX, your app is a great tool for simple background subtraction. Unfortunately, I'm doing something very specific: filming the person top-down while they're lying on the floor. It seems like the BodyTrackImage cannot work in this setting, as the person is very close to the floor. Is there a way to set the Kinect up so that it scans only a segment of the z axis?
@[jhoepffner](http://troikatronix.com/troikatronixforum/profile/15/jhoepffner), I tried to implement your example using Lengeling's library and Spout, but I can't seem to make it work: depthWidth cannot be resolved (my sketch is in the next post). Any ideas?
Thanks
ET
eratatat last edited by

import spout.*;
import KinectPV2.*;

PImage img;
PGraphics canvas;
PGraphics canvas2;
KinectPV2 kinect;
Spout spout;
Spout spout2;
int thresholdH = 1200; // max distance (in mm)
int thresholdL = 0;    // min distance (in mm)

void setup() {
  size(640, 360, P3D);
  textureMode(NORMAL);
  kinect = new KinectPV2(this);
  kinect.enableDepthImg(true);
  kinect.enableBodyTrackImg(true);
  kinect.enableColorImg(true);
  kinect.init();
  canvas = createGraphics(1280, 720, P3D);
  canvas2 = createGraphics(1280, 720, P3D);
  img = loadImage("SpoutLogoMarble3.bmp");
  spout = new Spout(this);
  spout2 = new Spout(this);
  spout.createSender("rgb image");
  spout2.createSender("mask");
}

void draw() {
  background(0, 90, 100);
  noStroke();
  canvas.beginDraw();
  canvas.image(kinect.getColorImage(), 0, 0, 1920, 1080);
  canvas.endDraw();
  spout.sendTexture(canvas);

  // read kinect depth
  int[] rawData = kinect.getRawDepthData();
  canvas2.beginDraw();
  canvas2.loadPixels();
  // draw the depth image in white between high and low limits
  for (int x = 0; x < kinect.depthWidth; x++) {
    for (int y = 0; y < kinect.depthHeight; y++) {
      int offset = x + y * kinect.depthWidth;
      int rawDepth = rawData[offset];
      int pix = x + y * img.width;
      if (rawDepth > thresholdL && rawDepth < thresholdH) {
        canvas2.pixels[pix] = color(255, 255, 255, 255); // draw white inside limits
      } else {
        canvas2.pixels[pix] = color(0, 0, 0, 0); // draw black outside limits
      }
    }
  }
  canvas2.updatePixels();
  canvas2.endDraw();
  spout2.sendTexture(canvas2);
}
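The "depthWidth cannot be resolved" error in the sketch above is consistent with Lengeling's KinectPV2 exposing the depth-frame size as static constants rather than instance fields. Here is a hedged rewrite of the mask loop, assuming the KinectPV2.WIDTHDepth and KinectPV2.HEIGHTDepth constants (512 × 424) that the library's examples use; it also sizes the mask canvas to the depth frame and indexes pixels by the depth width instead of img.width, so the index stays inside canvas2.pixels:

// Depth-band mask via KinectPV2 and Spout, rewritten around the
// assumption that the depth size comes from the static constants
// KinectPV2.WIDTHDepth and KinectPV2.HEIGHTDepth (512 x 424).
import spout.*;
import KinectPV2.*;

KinectPV2 kinect;
Spout spout2;
PGraphics canvas2;
int thresholdH = 1200; // max distance (in mm)
int thresholdL = 0;    // min distance (in mm)

void setup() {
  size(640, 360, P3D);
  kinect = new KinectPV2(this);
  kinect.enableDepthImg(true);
  kinect.init();
  // mask canvas matches the depth frame, so pixel indices line up
  canvas2 = createGraphics(KinectPV2.WIDTHDepth, KinectPV2.HEIGHTDepth, P3D);
  spout2 = new Spout(this);
  spout2.createSender("mask");
}

void draw() {
  background(0);
  int[] rawData = kinect.getRawDepthData();
  canvas2.beginDraw();
  canvas2.loadPixels();
  for (int x = 0; x < KinectPV2.WIDTHDepth; x++) {
    for (int y = 0; y < KinectPV2.HEIGHTDepth; y++) {
      int pix = x + y * KinectPV2.WIDTHDepth;    // index by depth width, not img.width
      int rawDepth = rawData[pix];
      if (rawDepth > thresholdL && rawDepth < thresholdH) {
        canvas2.pixels[pix] = color(255);        // white inside the z-axis band
      } else {
        canvas2.pixels[pix] = color(0, 0, 0, 0); // transparent outside it
      }
    }
  }
  canvas2.updatePixels();
  canvas2.endDraw();
  image(canvas2, 0, 0, width, height);  // local preview
  spout2.sendTexture(canvas2);
}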
DusX Tech Staff last edited by

The app can easily be changed or added to, to supply the depth image. Perhaps using levels and then a luma key could provide something useful...

fubbi last edited by

Hi

This was a Processing app I made a few years back. It created a black-and-white mask from the Kinect depth data and sent it out via Syphon. I have not opened it for a while, but it worked fine, and maybe there is something useful in there.
@[eratatat](http://troikatronix.com/troikatronixforum/profile/6259/eratatat) there is a threshold value for the mask in the attached code, so you can define the z-depth start and end of the mask; I can't say if it is sensitive enough for somebody lying on the floor. I remember mapping the threshold to OSC so I could tweak it from Isadora, i.e. move it back and forth (a sketch of that pattern follows below).
Hope it helps you
yours
fubbi

7630f9-trapping_v12.pde.zip

Mac M2 Ultra, 64gb — Berlin

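A minimal sketch of the OSC-tweakable threshold fubbi describes, using the standard oscP5 library; the port number and the /threshold address are hypothetical and must match the OSC Transmit actor in Isadora:

// Depth threshold that can be adjusted live over OSC from Isadora.
// Port 12000 and the "/threshold" address are placeholder choices.
import oscP5.*;

OscP5 oscP5;
int thresholdH = 1200; // max distance (in mm), updated live over OSC

void setup() {
  size(200, 200);
  oscP5 = new OscP5(this, 12000); // listen on UDP port 12000
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/threshold")) {
    thresholdH = (int) msg.get(0).floatValue(); // first argument sets the cutoff
  }
}

void draw() {
  background(0);
  text("thresholdH = " + thresholdH + " mm", 20, 100); // show current value
}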
Fred last edited by

I am pretty sure the depth threshold is independent of the OpenNI or skeleton tracking that recognises humans and makes the mask. There are a few threads about this on the Microsoft forums. You get a better chance at tracking if you increase the contrast behind the person (like a matte black background that also absorbs infrared light). The skeleton tracking that produces the mask uses all the images together to find people.

Also, too obvious probably, but make sure that the orientation of the camera means that even lying down you appear to be standing the right way up in the image; the systems have a very hard time if you are sideways or upside down.
Fred

http://www.fredrodrigues.net/
https://github.com/fred-dev
OSX 13.6.4 (22G513) MBP 2019 16" 2.3 GHz 8-Core i9, Radeon Pro 5500M 8 GB, 32g RAM
Windows 10 7700K, GTX 1080ti, 32g RAM, 2tb raided SSD

eratatat last edited by

Hi!

Thanks a lot for your help, I really appreciate it!
@fubbi, I will check it out; this is exactly what I wanted to do, and the thing is that I'm very new to Processing and have very little time ahead of me to make this work.
I've had the amazing contribution of DusX with his cool little app; he also played with the contrast internally so that you can get more detail.
With my little experience, I still intuitively feel that minimising the scan distance will help the Kinect get the right information.
Worst-case scenario, we're even considering having her move on a rotating stool, so that she's a bit detached from the floor.
Will let you know how this develops. It's been quite a saga. But with all of you helping out, it feels like a solution is close!
Thanks