[ANSWERED] Masking OpenNI Depth Video
-
Hi folks, I'm running the OpenNI Tracker Depth Video into Chroma Key and layering a video into the dancer's silhouette, then projecting that back onto the dancer.
I'd like to mask the video, effectively blacking out sections so it doesn't cover the full body. Then I'd like to gradually take the mask(s) away until the full body is revealed. I can't quite figure out how to do this, and would appreciate any suggestions!
-
@emullis have a look at Alpha Mask.
-
@emullis
Depending on what shape you want to mask it to, an easy way would be to run the video feed through a Shapes actor. If you set the shape to have a transparent fill and a black outline, starting with a large value for the outline thickness, you can 'open up' the image by growing the size of the shape and shrinking the outline. This could also be done dynamically, using data from the dancer's movement, from the generated image, or both.
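If it helps to see the reveal logic outside Isadora, here is a rough sketch of the two parameter ramps (the value ranges are made up; in practice you'd drive these two inputs from an Envelope Generator or from movement data, not code):

```python
def mask_params(t):
    """t runs from 0.0 (fully masked) to 1.0 (fully revealed).
    Returns (shape_size, outline_thickness) for the Shapes actor:
    the shape grows while its black outline thins away."""
    size = 20.0 + t * 80.0        # shape grows from 20 to 100 (percent of stage)
    thickness = (1.0 - t) * 50.0  # outline shrinks from 50 down to 0
    return size, thickness
```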
-
@emullis Another method (see the sketch after this list) is to:
- Chain together a bunch of Shapes actors and feed that into an Alpha Mask actor
- Use the Skeleton data from the OpenNI Tracker actor plugin to attach each Shapes actor to a single skeleton point
- Then increase the size of each Shapes actor as you want that body part revealed
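Just to make the logic of that chain explicit (not tested in Isadora; the joint names and units below are made up), each Shapes actor behaves like a circle pinned to one skeleton point, and the Alpha Mask actor shows the video wherever any circle covers a pixel:

```python
# Hypothetical sketch: one mask circle per tracked skeleton joint.
# In Isadora, each circle is a Shapes actor positioned by OpenNI
# Tracker skeleton data, and the chain feeds an Alpha Mask actor.
joint_radius = {"head": 0.0, "torso": 0.0, "left_hand": 0.0, "right_hand": 0.0}

def reveal(joint, amount):
    """Grow the mask attached to one skeleton point (0.0-1.0 stage units)."""
    joint_radius[joint] = min(1.0, joint_radius[joint] + amount)

def alpha_at(px, py, joints):
    """joints: {name: (x, y)} live positions from the skeleton tracker.
    A pixel is visible if it falls inside any joint's circle, i.e. the
    union of the chained shapes acts as the alpha mask."""
    for name, (jx, jy) in joints.items():
        r = joint_radius[name]
        if (px - jx) ** 2 + (py - jy) ** 2 <= r * r:
            return 1.0  # revealed
    return 0.0          # still masked (black)
```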
One thing to keep in mind with all of this is that your projector/beamer likely has a 16:9 aspect ratio, whereas the ghost-image video feed you get from the Kinect will be 4:3. Stacked on top of each other, the projector's throw will be wider than the ghost-image feed, so all calculations involving stage width need to be adjusted (unless you change the output resolution of your projector/beamer to a 4:3 aspect ratio).
resolution-and-aspect-charts.zip
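To make that concrete with a hypothetical setup (a 1920x1080 projector and the Kinect's 640x480 depth image), the arithmetic looks like this:

```python
# Hypothetical: 16:9 projector, 4:3 Kinect depth feed.
proj_w, proj_h = 1920, 1080

# Scale the 4:3 feed to fill the projector's height.
feed_w = proj_h * 4 // 3      # 1440 px of feed width
bar = (proj_w - feed_w) // 2  # 240 px of projector throw on each side

print(feed_w, bar)  # 1440 240 -> stage-width math must account for these bars
```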
Another challenge will be keeping things centered because data from the real world (like skeleton data) doesn't come in as 'clean' ranges (unlike, for example, the data output of a Wave Generator actor which will always output a number between 0 and 100). Because of this, you need to observe the incoming 'dirty' data, find the range (using a Hold Range actor), then scale it to a 'clean' range (using a Limit Scale Value actor). The User Actors in the background Scene of this file should help: https://www.dropbox.com/sh/z89zdgxi4wlgf3w/AAB_UyrTdOlEMOhNkG-IQ1sea?dl=0
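If you want to see the idea outside Isadora, here is a rough Python sketch of what that Hold Range + Limit Scale Value chain does (the 0-100 output range matches the Wave Generator example above; the class and method names are mine):

```python
class RangeScaler:
    """Sketch of Isadora's Hold Range + Limit Scale Value chain:
    observe incoming 'dirty' data to learn its range, then map
    it onto a 'clean' 0-100 range."""

    def __init__(self):
        self.low = None   # smallest value seen so far (Hold Range)
        self.high = None  # largest value seen so far

    def update(self, value):
        # Hold Range: widen the observed range as new data arrives.
        if self.low is None or value < self.low:
            self.low = value
        if self.high is None or value > self.high:
            self.high = value

    def scale(self, value):
        # Limit Scale Value: map [low, high] onto [0, 100], clamped.
        if self.low is None or self.high == self.low:
            return 0.0
        t = (value - self.low) / (self.high - self.low)
        return max(0.0, min(1.0, t)) * 100.0
```

You'd feed each raw skeleton coordinate through update() while the dancer moves through the whole stage during a calibration pass, then use scale() live.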
Most importantly, you need to keep the physical distance between the Kinect lens and the projector/beamer lens to a minimum. If the Kinect is sitting on top of the projector onstage, you only have to make minimal corrections to map the content onto a moving body, compensating for the small offset between the two lenses. If your Kinect is onstage and your projector is all the way at the back of the theatre, you're going to have an immensely difficult time correcting for the huge difference in the positions and angles of the lenses.
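For the Kinect-on-top-of-projector case, that correction can be as simple as a constant offset plus a scale applied to the tracked coordinates. A rough sketch; the numbers are placeholders you'd find by eye during calibration:

```python
# Hypothetical calibration constants, tuned by eye during setup.
OFFSET_X, OFFSET_Y = 12.0, -8.0  # projector-space pixels
SCALE_X, SCALE_Y = 1.03, 1.03    # compensates slightly different throws

def kinect_to_projector(x, y):
    # Map a tracked Kinect coordinate into projector space.
    return x * SCALE_X + OFFSET_X, y * SCALE_Y + OFFSET_Y
```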
A final thing to keep in mind: after you've done the Hold Range + Limit Scale Value data cleaning and/or the lens-offset correction to get the image to hit the performer, moving the projector or the Kinect even slightly means you'll need to re-calibrate both the data scaling and the lens correction.
-
@skulpture, @dbini, @Woland, thank you so much! I will experiment with all of this.
With gratitude,
e.