    Motion Tracking through Isadora

    How To... ?

    • Maxime
      Maxime last edited by

      I've been working a lot on these questions lately, for a theatre project.
      I agree with @Marci that skeleton detection isn't necessarily the best solution: it requires a setup phase which is quite painful in the middle of a performance (but if you want to go that way, I'd recommend having a look at Synapse).
      I've been using a Processing sketch with the OpenKinect library to get the depth image from the Kinect and send it to Isadora through Syphon, where I do the blob detection. It may not be the most efficient solution, but it works easily and well enough for me. I manage to cover an image more than 7 metres wide, though the further away you get, the more unstable the result...
      I'd be curious to know which solution you went for in the end!
      @Marci Your particle video is really nice, I'd be curious to see how you did that as well :)

      MBP 15" end 2013 / i7 2,3Ghz /NVIDIA GT 750M 2048Mo / 16Go RAM / OS 10.10.3
      Razer Blade 15" 2018 /i7/ GTX1070/ 16Go RAM

      • Marci
        Marci last edited by

        Been in bed ill since Friday evening so I haven't had a chance to tweak it up for sharing, but as soon as it's done I'll post the Processing src & Izzy patches to the Dropbox share I linked to above... :)

        rMBP 11,3 (mOS 10.13) / rMBP 11,4 (mOS 10.14) / 3x Kinect + Leap / TH2Go
        Warning: autistic - may come across rather blunt and lacking in humour!

        • Marci
          Marci last edited by

          NB: This is Mac OS X only!
          Grab Processing 2 from here: [Mac OS X](http://download.processing.org/processing-2.2.1-macosx.zip)
          (If you already have Processing 3 installed, just rename it in your Applications folder to "Processing 3" so the two can live side by side. Processing 2 is more stable for Kinect & Syphon purposes.)
          Open Processing 2, head to Sketch > Import Library > Add Library.
          Find and install...
          • Syphon
          • oscP5
          • Open Kinect for Processing
          You'll also need the flob library... download libraries.zip (attached), extract it, and drop it into your Processing folder.
          Now plug in your Kinect, download and extract Playlist_Sketch.zip, open Playlist_Sketch.pde in Processing 2, hit the Play button top left, and click on the output window.
          You'll start up on KinectFabric. Hold an arm out in front of you. Press 'D' to see the depth image, use up and down arrows to move the depth threshold until your hand is red. Press 'D' again to turn off the depth image and play with the curtain.
          Now, press 8. You'll flip to Particles. Press 'D' again to reveal the depth image. Sit steady with a hand outstretched, and particles should be generated and flock to it. Move it around and the flock will follow. Stand up and move around slowly... the flock will flock to your entire blob.
          Click onscreen with the mouse and the particles are released to roam free. Click again and they'll head back to your blob silhouette. Press 'D' again to turn off depth image and play away! Again, up & down keys adjust depth threshold.
          Press 7. Now you're viewing the particle text sketch. Either sit and wait, or click onscreen with your mouse to initiate the first particle burst. From then on it's timed. If you look at the source code (07_particlestext tab), and scroll all the way to the bottom, you'll find an array of words. It just runs through those in a loop, randomising position... have a look at the code for how these are rendered as images then scanned pixel-by-pixel...
          There are other interactive sketches on keys 0 thru 8.
          q & a tilt your Kinect up & down.
          g toggles Gravity on and off for the Curtain (init sketch / key 5)
          t toggles from grid to chains for the Curtain (init sketch / key 5)
          On the 6 key you'll find the multi blob sketch.
          On the 4 key is a triangulated blob visualiser.
          On the 3 key, a hand activated torch (if this causes a crash, make a folder called ImageTorch in your processing folder, and download the contents of the same folder from yonder: https://www.dropbox.com/home/Sensadora/Processing%202.x and stick it in there. Then it should work)
          On the 2 key is a mouse-activated sketch. Click and drag.
          On the 1 key is another mouse-activated sketch... just drag over the particles.
          On the 0 key is a particle-based streamer sketch which should follow your hand/blob around the screen also.
          https://youtu.be/O5cKPxaaufk
          Have a play! Look at the source. Much is commented. The majority are sketches from OpenProcessing.org - I've just tweaked them to work in this context / add Kinect / Syphon / OSC / embed them into a single controllable sketch.
          Scroll to the bottom of the first tab and look for the `if (key ==` statements to see which keys do what.
          I recommend you listen for OSC streams in Isadora and fire up a Syphon listener, as quite a lot gets output for you to integrate depending on which sketch you're using.
          PS: This is a work in progress... by no means complete.
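          As a rough sketch of that output plumbing (assuming the Syphon and oscP5 libraries installed earlier; the OSC address pattern and the listening port are illustrative, with 1234 being Isadora's default OSC input port):

          ```java
          // Minimal example of the output pattern: frames out over Syphon,
          // tracking values out over OSC. The /blob/pos address is hypothetical.
          import codeanticode.syphon.*;
          import oscP5.*;
          import netP5.*;

          SyphonServer server;
          OscP5 osc;
          NetAddress izzy;

          void setup() {
            size(640, 480, P3D);                        // Syphon needs an OpenGL renderer
            server = new SyphonServer(this, "Processing Syphon");
            osc = new OscP5(this, 9000);                // our own listening port (arbitrary)
            izzy = new NetAddress("127.0.0.1", 1234);   // Isadora listens on 1234 by default
          }

          void draw() {
            background(0);
            ellipse(mouseX, mouseY, 40, 40);            // stand-in for the real visuals
            server.sendScreen();                        // publish this frame to Syphon
            OscMessage m = new OscMessage("/blob/pos"); // hypothetical address pattern
            m.add((float) mouseX / width);              // normalised 0..1 values
            m.add((float) mouseY / height);
            osc.send(m, izzy);
          }
          ```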

          75d3cf-playlist_sketch.zip 125c8a-libraries.zip

          • Maxime
            Maxime last edited by

            Thanks a lot for that @Marci! I'm going to have a serious look at it tomorrow!

            Glad you're not sick anymore! :)

            • Marci
              Marci last edited by

              If you look at how sketch 7 works, it creates a PGraphics element with a white background and renders the text onto that rather than displaying it on screen. We then iterate over the x and y, checking each pixel, looking for a black one (or possibly red - can't remember what state I left the patch in). Wherever a matching pixel is found, a particle is added to an array with that x/y position - the particle is then released from (0,-100) with acceleration and velocity, and is attracted towards its predestined x/y coordinates... it overshoots and spirals back, losing velocity as it goes, until it flocks around its spot.
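              A bare-bones reconstruction of that idea (illustrative names and values, not the code from the shared patch):

              ```java
              // Illustrative reconstruction of the text-to-particles technique
              // described above - not the actual code from the shared patch.
              ArrayList<Particle> particles = new ArrayList<Particle>();

              void setup() {
                size(640, 480);
                PGraphics pg = createGraphics(width, height);
                pg.beginDraw();
                pg.background(255);                 // white background...
                pg.fill(0);                         // ...black text
                pg.textSize(96);
                pg.textAlign(CENTER, CENTER);
                pg.text("HELLO", pg.width / 2, pg.height / 2);
                pg.endDraw();
                pg.loadPixels();
                // Iterate over x/y looking for dark pixels; each match gets a
                // particle whose destination is that pixel's coordinates.
                for (int y = 0; y < pg.height; y += 4) {
                  for (int x = 0; x < pg.width; x += 4) {
                    if (brightness(pg.pixels[y * pg.width + x]) < 128) {
                      particles.add(new Particle(x, y));
                    }
                  }
                }
              }

              void draw() {
                background(0);
                stroke(255);
                for (Particle p : particles) {
                  p.update();
                  point(p.pos.x, p.pos.y);
                }
              }

              class Particle {
                PVector pos = new PVector(0, -100);  // released from (0,-100), as above
                PVector vel = new PVector(0, 0);
                PVector target;

                Particle(float tx, float ty) {
                  target = new PVector(tx, ty);
                }

                void update() {
                  // Accelerate towards the target; the damping factor makes the
                  // particle overshoot, then spiral back in, losing velocity.
                  PVector acc = PVector.sub(target, pos);
                  acc.mult(0.01);
                  vel.add(acc);
                  vel.mult(0.95);
                  pos.add(vel);
                }
              }
              ```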

              Exactly the same principle can be used on the raw depth image from a Kinect (sketch 8). While going over x and y, we check the raw depth of that pixel, and if it falls within a defined range (our minimum and maximum depth thresholds) we colour it red. Using the same technique as for the particle text, we add a particle to an array for any matching red pixel, and launch them from (0,-100).
              The trick is keeping the particle count in check, so when we're dealing with large blobs such as in scene 8, we have a skip variable and only look at every 3rd pixel - essentially lowering the resolution. Otherwise you'll find the CPU gets stressed and all your fans spin up. We also give particles a lifespan, and once their counter (in frames) has reached 0, they're marked for removal the next time they exit the boundaries of the screen. Again, this helps keep the load in check.
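              A sketch of that scan loop (assuming Shiffman's Open Kinect for Processing; initDepth() and getRawDepth() are the method names in recent releases and may differ in the Processing 2 era version, and the threshold values are arbitrary):

              ```java
              // Depth-threshold scan with a skip step, as described above.
              import org.openkinect.processing.*;

              Kinect kinect;
              int minDepth = 600;   // raw depth units - tune for your space
              int maxDepth = 900;
              int skip = 3;         // only inspect every 3rd pixel to spare the CPU

              void setup() {
                size(640, 480);     // Kinect v1 depth image is 640x480
                kinect = new Kinect(this);
                kinect.initDepth();
              }

              void draw() {
                background(0);
                int[] depth = kinect.getRawDepth();
                loadPixels();
                for (int y = 0; y < 480; y += skip) {
                  for (int x = 0; x < 640; x += skip) {
                    int d = depth[y * 640 + x];
                    if (d > minDepth && d < maxDepth) {
                      // In-range pixel: colour it red. The full sketch would also
                      // add a particle here, launched from (0,-100) as in the
                      // text version.
                      pixels[y * width + x] = color(255, 0, 0);
                    }
                  }
                }
                updatePixels();
              }

              void keyPressed() {
                // Up/down arrows move the far threshold, as in the playlist sketch.
                if (keyCode == UP)   maxDepth += 10;
                if (keyCode == DOWN) maxDepth -= 10;
              }
              ```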
              There are all sorts of optimisations that could be done. These two demonstrate depth layer tracking to emulate motion tracking in a not-really-efficient way.
              Patch 6 is the multi blob sketch that I need to work with next.
              At the moment this is all limited to using a single Kinect feed (i.e. just the depth feed), mainly because I'm on a Retina MacBook with USB3. The USB library used in SimpleOpenNI and OpenKinect has issues with isochronous packets and tends to crash out if you try using more than one feed (e.g. to do a coloured particle cloud, textured from the RGB camera image, you'd need both the RGB and depth feeds - on a USB3 MacBook this is likely to crash out rapidly). This is why a lot of other Kinect middleware out there can be flaky for some users (indeed, many of them are actually Processing sketches compiled out as standalone executables)...
              If you have an older MacBook WITHOUT USB3 (a non-retina MBPro 17" with AMD GPU, for instance), then the sketches above could all have the RGB feed enabled and particles textured from that... or we could pull in the user feed and handle skeleton alongside blob detection etc.
              A decent powered USB2 hub may get round the issue for Retina MBPro users, but I haven't got one so can't confirm that.

              • Bill Cottman
                Bill Cottman last edited by

                Thank you Marci. It's going to take some time for me to get up to speed on this, but I am thankful for the path you've provided.

                http://www.BillCottman.com : Isadora3.0.8f09 with MBP OS X 10.11.6 in Minneapolis, MN

                • Maxime
                  Maxime last edited by

                  @Marci thanks for all this, it's going to take me a while to go through these sketches...

                  But it's very close to something I want to do, and I feel like it's a really good pointer for me.
                  I didn't know about that USB3 issue; I'm (now gladly) the owner of an old MBP with USB2.
                  Thanks again!

                  • chiligoldman
                    chiligoldman last edited by

                    Hi Marci.

                    Thank you for sharing your tutorial with code examples. It's very informative, and it's something I've been trying to work out for a while. I'm having some stability issues and get errors running Processing that sometimes result in a crash. I wonder if it could be down to versions.
                    I'm curious as to which Kinect sensor you are using. Do you think that could have anything to do with it? Mine is the first generation.
                    Thanks again! I appreciate it.
                    • Marci
                      Marci last edited by

                      Me? Kinect for Xbox 360... the 1417 model (x2). The only issue with these is when running over USB3 and using multiple feeds, as noted above.

                      • gapworks
                        gapworks last edited by

                        @Marci

                        Finally got it working, but I can't get it into Isadora :( Processing does not appear in the Syphon receiver.

                        Running MBP2017 / Ventura Osx 13.6.7 / 16 GB 2133 MHz LPDDR3 / Intel HD Graphics 630 1536 MB / Latest Isadora Version / www.gapworks.at / located in Vienna Austria

                        • gapworks
                          gapworks last edited by

                          @skulpture

                          Finally getting to motion tracking / Kinect etc. I looked into your tutorials today. I even downloaded Synapse, but all I get is a black preview image. Could it be that it's not working in Yosemite? Or is it the Max patch that is missing (also in your tutorial)?
                          Thanks for any further information.
                          p.

                          • Skulpture
                            Skulpture Izzy Guru last edited by

                            Hi. Yes this is an old tutorial I'm afraid. I will have another look into it for you as soon as I can.

                            Graham Thorne | www.grahamthorne.co.uk
                            RIG 1: Custom-built PC: Windows 11. Ryzen 7 7700X, RTX3080, 32G DDR5 RAM. 2 x m.2.
                            RIG 2: Laptop Dell G15: Windows 11, Intel i9 12th Gen. RTX3070ti, 16G RAM (DDR5), 2 x NVME M.2 SSD.
                            RIG 3: Apple Laptop: rMBP i7, 8gig RAM 256 SSD, HD, OS X 10.12.12

                            • Marci
                              Marci last edited by

                              "finally got it working but i can't get it into isadora :( Processing does not appear in the syphon receiver."

                              Hit CMD-SHIFT-O in Processing and it should give you the Examples browser... Expand Contributed Libraries, find the Syphon library's examples, run them, and check they show up in Isadora.
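                              If the examples don't show up either, a stripped-down test like this (same Syphon library; the server name is arbitrary) separates a library or driver problem from a Kinect-sketch problem:

                              ```java
                              // Smallest possible Syphon check: if this server doesn't
                              // appear in Isadora's Syphon receiver, the issue is the
                              // library/setup, not the Kinect sketch.
                              import codeanticode.syphon.*;

                              SyphonServer server;

                              void setup() {
                                size(400, 400, P3D);           // Syphon requires an OpenGL renderer
                                server = new SyphonServer(this, "Syphon Test");
                              }

                              void draw() {
                                background(frameCount % 255);  // something visibly changing
                                server.sendScreen();
                              }
                              ```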

                              • gapworks
                                gapworks last edited by

                                @Marci Yes, the examples do show up! See attached screenshot.

                                best

                                084b9c-screen-shot-2015-12-14-at-16.52.01.png

                                • Skulpture
                                  Skulpture Izzy Guru last edited by

                                  @gapworks

                                  Worked fine for me. I did have to do the 'hands up' pose and then it sprang to life.

                                  55ab83-screen-shot-2015-12-15-at-09.10.31.png

                                  • gapworks
                                    gapworks last edited by

                                    @skulpture Yes, after moving (more like jumping) in front of the Kinect, it woke up after all. I even managed to get the skeleton working, but when I went back to my laptop (meaning closer to the sensor) it really freaked out :( so I might end up using NI mate...

                                    • Marci
                                      Marci last edited by

                                      Anything based on SimpleOpenNI (including NI mate) takes an age to initialise the skeleton properly unless it can see your full height (feet to head) in shot. When you get too close, the skeleton's legs frequently start to go whappy - the skeleton doesn't like it when it can't work out where a limb is and just makes something up, which often freaks out whatever the Kinect is driving... one of many reasons I avoid skeleton tracking and prefer depth blobs. Otherwise, you have to add your own checking routines to remove limb markers between your middleware and output, or disable whatever a limb is driving when its coordinates exceed screen / stage / world boundaries or simply vanish.
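                                      A generic sketch of such a checking routine (illustrative only - the joint position and confidence values would come from whatever skeleton middleware is in use; the mouse stands in here so the sketch runs on its own):

                                      ```java
                                      // Ignore a limb marker when the middleware is unsure or
                                      // the coordinates leave the stage, and hold the last
                                      // known-good position instead of jumping.
                                      PVector lastGood = new PVector(200, 200);

                                      void setup() {
                                        size(400, 400);
                                      }

                                      void draw() {
                                        background(0);
                                        // The mouse stands in for a skeleton joint here.
                                        PVector joint = new PVector(mouseX, mouseY);
                                        float confidence = 1.0;  // hypothetical per-joint confidence
                                        if (jointUsable(joint, confidence)) {
                                          lastGood.set(joint);
                                        }
                                        fill(255);
                                        ellipse(lastGood.x, lastGood.y, 20, 20);
                                      }

                                      boolean jointUsable(PVector joint, float confidence) {
                                        if (confidence < 0.5) return false;               // middleware unsure
                                        if (Float.isNaN(joint.x) || Float.isNaN(joint.y)) return false;
                                        if (joint.x < 0 || joint.x > width) return false; // off-stage
                                        if (joint.y < 0 || joint.y > height) return false;
                                        return true;
                                      }
                                      ```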
