    [ANSWERED] How do I use Effects with openNI Tracker (or motion capture)?

    How To... ?
    • delToroSanchez (last edited by Woland)

      Hi guys! I'm a newbie currently taking an interactive video class. I've been exploring the Isadora software and watching tutorial videos, but I'm still confused about how to accomplish the video I want to make.

      I was planning to use the OpenNI Tracker and a projection on the floor. I want to make it interactive by causing ripples underneath people's feet as they walk around the floor (a water projection). And it'd be awesome if I could also add water sounds as people walk around the projection...

      I've been trying to work on this, but I have no idea how I could accomplish it... if anyone could assist or help me in any way, it would be very appreciated ;-;

      Sincerely... 

      • DillTheKraut @delToroSanchez (last edited by DillTheKraut)

        @deltorosanchez

        The OpenNI Tracker is meant to be used with 3D depth cameras like the Kinect or Orbbec, mainly to recognise human bodies, which obviously isn't easy from above. It may be easier to use the depth video stream offered by the OpenNI actor from those depth sensors (see the 'output depth' value). In combination with the Eyes++ actor, which tracks 'hot spots' in a b/w image, everything beyond a set threshold distance above the floor would be recognised and tracked, and could trigger something like a sound or effect actor (see the depth min/max values).


        See this example.
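        To sketch the thresholding idea outside of Isadora: conceptually, it is just clamping each depth pixel between a near and a far distance and turning the result into a b/w image. A minimal Python/NumPy sketch, assuming a 16-bit depth frame in millimetres (the function name and sample values are hypothetical, not Isadora's internals):

```python
import numpy as np

def depth_to_bw(depth_mm, min_mm, max_mm):
    """Map a depth frame (in millimetres) to an 8-bit b/w image.

    Pixels outside [min_mm, max_mm] go black; pixels inside become a
    grey gradient, similar to the depth min/max values mentioned above.
    """
    inside = (depth_mm >= min_mm) & (depth_mm <= max_mm)
    # Normalise the in-range depths to 0..255.
    scaled = (depth_mm.astype(np.float32) - min_mm) / (max_mm - min_mm) * 255.0
    return np.where(inside, scaled, 0.0).astype(np.uint8)

# Hypothetical 640x480 frame; keep everything between 0.5 m and 2.0 m.
frame = np.random.randint(0, 4000, size=(480, 640), dtype=np.uint16)
bw = depth_to_bw(frame, 500, 2000)
```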

        • dbini

          Bear in mind that depth cameras won't work from overhead: they are looking for human skeleton shapes, so they need to be placed on the stage, best at around 1 m from the floor.
          It is possible to get the x and z coordinates of the people on stage from a depth camera, then map a video effect to those coordinates and project it onto the floor from a projector above the stage. BUT: all cameras see a cone of light, and depth cameras like the Kinect are no different, so you would only be able to track a triangular area of the stage unless you use multiple cameras.
          I have achieved the kind of effect you want by using a camera overhead, mounted close to the projector. A black-and-white CCTV camera with an infrared pass filter will see anything lit with infrared light but will not see the projected image. I lit my actors with IR focused off the floor and used the Eyes actor in Isadora to track the resulting blobs.
          You will need a lot of headroom: the projector and camera should be at least 5 m from the floor (it's possible to use super-wide lenses if things are rigged lower, but then the distortion of the image starts to cause problems).
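          To put rough numbers on the headroom question: the covered floor area follows directly from the mounting height and the lens field of view. A hypothetical back-of-envelope sketch in Python; the FOV figures are placeholders, so check your camera's spec sheet:

```python
import math

def floor_coverage(height_m, h_fov_deg, v_fov_deg):
    """Width and depth (in metres) of the floor area seen by a
    downward-facing camera mounted height_m above the floor."""
    w = 2 * height_m * math.tan(math.radians(h_fov_deg) / 2)
    d = 2 * height_m * math.tan(math.radians(v_fov_deg) / 2)
    return w, d

# E.g. a Kinect v1 style lens (roughly 57 x 43 degrees) rigged at 5 m:
print(floor_coverage(5.0, 57.0, 43.0))  # ~ (5.4, 3.9) metres
```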

          John Collingswood
          taikabox.com
          2019 MBPT 2.6GHZ i7 OSX15.3.2 16GB
          plus an old iMac and assorted Mac Minis for installations

          • DillTheKraut @dbini (last edited by DillTheKraut)

            @dbini

            Regarding the depth sensor restrictions, you are right about the body recognition function in OpenNI.

            But as I mentioned, that is 'only' an extra feature on top of the depth readings. Placed above the stage, using only the greyscale depth picture, you can 'cut out' everything below a defined level above the stage. Therefore everything below, say, 1 metre above stage level would be shown as black, while everything higher would be a gradient from grey to white, depending on the value settings. This gives you a black-and-white image similar to the IR image, which Eyes++ could then process in the same way. In fact, most sensors like the Kinect are based on an IR point cloud system, which is good to know, because intense IR lighting can corrupt the depth sensor data, and therefore the body tracking as well.
            A downside of depth sensors is their mostly restricted viewing angles. As you write yourself, the same can be said of CCTV cameras, but those are more flexible, since one can choose cameras with fixed, interchangeable, or zoom lenses. On the other hand, getting the environment right for pure IR tracking can be tricky.
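            For illustration only: the blob step over such a b/w image could be sketched with OpenCV's connected components, which is roughly the kind of 'hot spot' tracking Eyes++ performs (a hypothetical analogy, not the actor's actual implementation). Each returned centroid could then drive a ripple position and trigger a water sound:

```python
import cv2

def find_blobs(bw, min_area=200):
    """Return (x, y) centroids of bright blobs in an 8-bit b/w image,
    roughly the kind of data an Eyes++-style tracker reports."""
    _, mask = cv2.threshold(bw, 1, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; skip it and drop tiny noise blobs.
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```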

            • Armando (Beta Gold) @DillTheKraut

              @dillthekraut Yes, I have used Kinects in both ways: frontal (to get the skeleton) and overhead (to get clean zenithal tracking). That way I don't need a visible-light cutoff filter in front of the camera (the Kinect has one built in), I don't need to add infrared light, and therefore I don't need background subtraction.


              Depth cameras still rock

              Armando Menicacci
              www.studiosit.ca
              MacBook Pro 16-inch, 2021 Apple M1 Max, RAM 64 GB, 4TB SSD, Mac OS Sonoma 14.4.1 (23E224)
