[ANSWERED] how to map onto a moving person
RIL last edited by Woland
Hello everyone! I'm looking for advice on how to project onto a moving person. The idea is that the projection is "mapped" onto the moving body.
The scenic space is circular: an actor in white clothes, standing on a table in the center of the space, and that is all.
I imagine I will need at least two projectors to bathe the actor in light, plus a motion tracking system, and what else? All info is appreciated. Thank you very much!
Things I can tell you will be difficult to deal with:
- The biggest hurdle to overcome in mapping onto a moving target is the difference in the position of the lenses of the tracking camera(s) and their corresponding projector(s). You want each camera and its corresponding projector to have their lenses as close to the same position as possible so that the warping and correction you need to do to compensate for the difference is minimized.
- The second (or tied for first) biggest challenge you're going to face is that you're trying to track the body and project onto it from both sides. The cheapest (but not necessarily the best) ways to track and map video onto a moving target involve infrared light sources in one form or another. (A Kinect is good for small-area tracking. For larger areas where you only need to track and project from one direction, you can flood the wall behind the performer with infrared light and use an IR-sensitive camera in front of the performer, so that the performer shows up as a dead spot in the IR camera feed; that gives you the silhouette you need to start warping in order to get it mapped onto the person.) If you're shooting infrared light at the performer from both sides and have IR-sensitive cameras on both sides, the two rigs are likely to interfere with each other and decrease the accuracy of the IR tracking.
- Unless you use a very expensive system like BlackTrax, the faster the performer is moving, the harder it will be to keep the projection mapped onto them accurately. Solutions for this are to a) not mind the spill or the lag or b) have the performer move more slowly.
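On the first point above (the offset between the tracking camera's lens and the projector's lens): the usual way to compensate is to measure a handful of corresponding points, where a feature sits in the camera image versus where the projector must hit it, and fit a homography that warps one space into the other. Here is a minimal numpy sketch of that fit via the Direct Linear Transform; the point coordinates are purely illustrative:

```python
import numpy as np

def find_homography(src, dst):
    # Direct Linear Transform: solve for the 3x3 matrix H that maps each
    # camera-space point src[i] to its projector-space point dst[i].
    # Needs at least 4 correspondences in general position.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, x, y):
    # Apply the homography to one camera-space pixel (homogeneous coords).
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

In practice you would click the four corners of a projected test pattern as seen by the camera, fit H once, and then warp every tracked silhouette point before sending it to the projector output. (Tools like OpenCV's `findHomography`/`warpPerspective` do the same job on whole frames.)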
Knowing the following will likely help folks give you more finely-tuned advice:
- Your budget
- What equipment (if any) you already have (or to which you have access) in terms of cameras and projectors.
- The approximate size of the tracking space
- The position of the audience (e.g., if you're performing in the round it's harder to use some tracking methods than if you have an empty wall behind the performer that you can flood with IR light to get a nice silhouette of the performer.)
- The type of performance space
- The level of light control you have in the space (e.g. is it an indoor venue with no windows in the exterior walls, or are you performing outside [because sunlight can interfere greatly with many forms of motion tracking])
RIL last edited by
@woland Thank you very much for such a detailed reply!
It is certainly a complex thing...
From what my student tells me, this will be a low-budget project, so I do not see the use of infrared lights and tracking systems as possible. Perhaps we can try a less efficient or less "perfect" approach that simulates the tracking and the mapping as well as possible.
So, assuming we will use the Kinect as the only hardware + Isadora: my second big question is what the patching inside Isadora would look like, in order to finally be able to "feed" the performer's body with the projection?
Thanks a lot,
JJHP3 last edited by
@ril Wonderful reply - so amazing that Isadora has this sort of expertise available for such open-ended questions. Thanks from all of us!
liminal_andy last edited by liminal_andy
@ril I've done this a few times on a low budget. There are some tricks you could try:
Do you need live projection mapping or just live projection masking? The difference is that mapping would involve a tracking system of some sort and allow the media to follow the performer, whereas live masking is a bit easier to achieve, with the imagery being "revealed" by the performer but not actually moving.
If you can limit yourself to masking, I would try to frame the talent against black duvetyne fabric, which will absorb any spilt light from the mask. Then place a webcam over the projector lens and use Isadora's Desaturate and Contrast Adjust actors to "blow out" the performer's white outfit, then crop so that the output of this chain is the perfectly white clothing against perfectly dark surroundings.
Project this mask out of the projector, and attach a Zoomer actor while the performer is T-posing so that you can warp and align your mask to their body. Do this in a few positions on stage. Add some Gaussian Blur, which will reduce perceived spill and also give the illusion of the image "wrapping" around the performer. Then feed this as the mask input of an Alpha Mask actor. You may want to invert the mask depending on your goals.
You can then bring any content you want into the Alpha Mask actor, and the area the performer occupies in the webcam's view will roughly be the area where media is projected. Aesthetically, I suggest embracing the imperfection of this system in the performance. You would need hundreds of thousands of dollars to achieve a "perfect" result by combining dozens of IR blasters with several cutting-edge high-refresh-rate projectors and specialized cameras. That does not mean the DIY approach is worthless, though; you just need to respect the limitations and explore their implications in the performance.
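For anyone who wants to prototype this masking chain outside Isadora before patching it, here is a rough Python/numpy equivalent of the Desaturate → Contrast ("blow out") → Gaussian Blur → Alpha Mask chain. The actor behavior is only approximated: a separable box blur stands in for the Gaussian Blur actor, and the threshold value is an assumption you would tune live:

```python
import numpy as np

def luma_mask(frame, threshold=200):
    # frame: H x W x 3 uint8 webcam image. Desaturate, then "blow out"
    # the performer's white clothing into a hard 0/1 mask (the
    # Desaturate + Contrast Adjust step, approximated with a threshold).
    gray = frame.mean(axis=2)
    return (gray >= threshold).astype(float)

def soften(mask, radius=3):
    # Separable box blur standing in for the Gaussian Blur actor:
    # feathers the mask edge to hide spill and tracking lag.
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    for axis in (0, 1):
        mask = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, mask)
    return mask

def alpha_composite(content, mask):
    # Alpha Mask step: content shows only where the mask is bright.
    return (content * mask[..., None]).astype(np.uint8)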
The first 30 seconds of my old reel on the homepage of https://www.andycarluccio.com/ show projects like this that I did a few years ago, for a sense of the results.
Maximortal last edited by
I did something similar some time ago.
My setup was a Kinect One (the second version, I mean) connected to NI mate, then through Spout into Isadora (this was before Isadora's native Kinect support).
As mentioned, one of the biggest issues is the parallax between the Kinect and the projector. What I did was manually create a mapped surface made of three slices, adjusting the dimensions by trial and error until the image fit the body. The results were pretty good (note that I was only trying to put a solid color on the body, not a texture or anything complex) and the delay was acceptable.
I do not see the use of infrared lights and tracking systems possible. [...] So, assuming we will use the Kinect as the only hardware + Isadora.
It's important to understand that the Kinect's depth image relies on the emission and detection of infrared light. If I recall correctly, the original Kinect emits an array of infrared dots, and that infrared light bounces off the performer and back to the Kinect's IR camera. In very simple terms, it's like a bat's echolocation: emit sound (in this case, light), which bounces off of things and returns. Because the dot pattern is emitted from one point and observed from another (the IR projector and the IR camera sit a known distance apart), the Kinect can triangulate the location of whatever the light bounced off of.
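The triangulation described above boils down to the standard stereo relation: depth = focal length × baseline ÷ disparity, where the baseline is the distance between emitter and sensor and the disparity is how far a dot appears shifted in the sensor image. A toy sketch; the focal-length and baseline figures below are illustrative assumptions, not actual Kinect specifications:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Stereo triangulation: the farther apart the emitter and sensor
    # see the same IR dot (larger disparity), the closer the surface.
    # depth [m] = focal length [px] * baseline [m] / disparity [px]
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: ~580 px focal length, 7.5 cm baseline,
# a dot shifted 29 px puts the surface about 1.5 m away.
d = depth_from_disparity(580, 0.075, 29)
```

This also makes the interference problem below concrete: the depth estimate depends entirely on matching each dot to the known emitted pattern, so a second device's dots landing in the same area corrupt the disparity measurement.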
The reason that it's important to understand this is that if you have two Kinects in a room facing each other, they're both going to see the IR light emitted by the other Kinect and that interference is going to make both of them less accurate. The only way I've heard of people using multiple Kinects covering overlapping areas is by attaching a small motor to each Kinect that makes each Kinect vibrate at a different frequency. (Microsoft seems to have removed the article from their website but the Internet Archive preserved it: Article.) This solution might be beyond what your students are capable of rigging up, so be aware that if the Kinects' areas overlap, they're likely going to make each other less accurate.
RIL last edited by