How to have a person move an object that is generated/projected by Isadora

  • Hello forum!

    I'm trying to solve a problem and don't know if Isadora is capable of this.  I want Isadora to project (what would appear to be) a static backdrop of images (maybe 3D?) on a wall.
    I then want a dancer to be in front of the projected backdrops and have them push some of the images (specific ones, if that's what it takes to make it work) or, ideally, any of them. These images would then move somehow (sway, float, rock?). I don't want it to be canned, but actually interactive.
    See the attached image.
    Thank you


  • You need to read the dancer's movement and use that data to manipulate the video projection. There are many possibilities for tracking the movement - a camera, a camera plus IR lighting, a Kinect, various sensors via Arduino, to mention some.

    These kinds of projects have also been discussed on the Isadora forum many times over the years.
    I have done a few installations using these tools with Isadora. While it is not too difficult to use input data in Isadora, the hard part is getting usable (precise) data from the sensors.

  • You will need to track the movement of the dancer and coordinate the position of the hand with the relative position of the beamer. This can be quite complex, and there are many options; judging from your question, this would involve a lot of learning. It also needs a lot of equipment, and making the right choice will require either a lot of experience or experimentation, and possibly a lot of money. There are many things about this particular scenario that make it difficult.

    If the work is for a serious upcoming show, I would advise you to look for someone to work with who has this experience; maybe they will let you watch and learn. Otherwise, I would start searching this forum for tracking and skeleton tracking - there is a lot of good information, examples, tutorials and step-by-step guides to get you pretty close.
    You can also look for a kind of pre-made solution like this 
    In this case, though, canned may be the best option. Getting the interaction just right can be difficult and expensive, and in the end your audience would not know the difference - the solution that lets the show tell its story without causing production headaches is a good solution. Without a lot of experience in tracking and resources for extra equipment, canned may well work much better in the end.
    In general you need to know the position of the hand on all three axes, so a depth sensor or multiple cameras/sensors would be needed. If you want swaying, I assume you also need to know the speed of the movement so you can adjust the sway amount as well.
    Making an interactive video wall is quite a task, good luck.

  • Fred and Vanakaru,

    Appreciate your responses, and yes, I have a lot of learning to do. I did a workshop with Mark this summer and know how to track the skeleton using NI-Mate or Processing. I understand how to use 3D actors enough to align, calibrate and track objects and move them in space. But I don't want a continuous connection. I want the static to move when pushed. Sorry if that wasn't clear, or maybe it was?
  • Tech Staff

    Hey Joe,

    Monty here -- how was the rest of the summer?
    What you are talking about creating is fundamentally object collision detection. I'm not sure if anyone has built something that handles that using Isadora alone... you might be better off programming that element in something like Processing and Syphoning the result over to Isadora.

  • Do not underestimate Isadora. I had an installation where the projection of a viewer's body was replaced with a dancing headless nude while maintaining the position of the viewer's head on the nude's body. It was not too tricky to program.

    _But I don't want a continuous connection.  I want the static to move when pushed._
    This I do not understand - what is a "_continuous connection_" and what do you mean by "_static_"?
  • Izzy Guru

    I think the feature that is missing is the 'collision' aspect: telling a machine when a person's hand in XYZ space meets a virtual object in XYZ space.

    The only software that I know of that has done this is Kalypso by Frieder Weiss. But the software is not for sale. I am not sure if eMotion does it too? That software seems very unreliable right now.

  • DusX's 2D collision detection routine should give some hints on how to achieve this, roughly speaking... would need to sit down and think it through thoroughly, but am sure it'd be doable via some NI-Mate / OSC action... you'd just need to calibrate your space to your projection and ensure it stays fixed.

  • So, the XY inputs (args[0] & args[1]) of the 'point' would be the OSC hand position from NI-Mate... then set the upper-left and bottom-right corners of an interactive square.

    You'd need a JS actor per interactive square. 
    Use the true or false output state to trigger your... er... hover state.
    [EDIT] Hmmm... now I want to recreate the Nine Inch Nails Echoplex projected drum sequencer using Kinect & NI-Mate and Izzy. Dammit.
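    As an illustrative sketch (not DusX's actual routine - the argument order and names here are assumptions), the body of such a JS actor might look something like this:

    ```javascript
    // Point-in-rectangle hit test, in the style of an Isadora JS actor.
    // args[0..1] = hand X/Y from NI-Mate OSC; args[2..3] = upper-left
    // corner of the interactive square; args[4..5] = bottom-right corner.
    function pointInRect(x, y, left, top, right, bottom) {
      // True while the tracked point sits inside the square.
      return x >= left && x <= right && y >= top && y <= bottom;
    }

    // Isadora JS actor entry point: returns 1 (hover) or 0 (idle).
    function main() {
      var inside = pointInRect(arguments[0], arguments[1],
                               arguments[2], arguments[3],
                               arguments[4], arguments[5]);
      return [inside ? 1 : 0];
    }
    ```

    One of these per interactive square, with the 1/0 output driving the hover state.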

  • Shouldn't be too difficult to read Z as well, if memory serves (does NI-Mate return XYZ for a hand, or just XY? Can't recall), and then create an invisible interaction 'plane' at a static Z point so that you can work in front of or behind it.

    Might get the Leap Motion out tomorrow and have a play small scale.
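    The static-Z-plane idea could be sketched like this (the threshold value and names are purely illustrative, not NI-Mate's actual output format):

    ```javascript
    // Depth-gated hit test: the hand only "touches" a rectangle once its
    // Z value has crossed a fixed virtual plane in front of the wall.
    // The 1.2 threshold is an illustrative guess -- tune for your space.
    var Z_PLANE = 1.2;

    function inInteractionZone(x, y, z, rect) {
      var pastPlane = z <= Z_PLANE; // hand pushed through the virtual plane
      var overRect = x >= rect.left && x <= rect.right &&
                     y >= rect.top  && y <= rect.bottom;
      return pastPlane && overRect;
    }
    ```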

  • If you have a timer running on the scene, you could also measure the velocity of the movement at the time of interaction with two Z planes... when the first Z plane is broken the timer starts; when the second is broken the timer ends.

    Subtract start time from end time.
    Use the distance formula to measure the distance between the start and end points (although in an installation scenario, these would likely be a fixed constant).
    Divide the distance by the time difference: the higher the result, the faster the velocity. Use this to determine the severity of whatever movement you apply to the rectangle being interacted with.
    Javascript opens up a h00j world of fun... and maths.
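    The arithmetic above could be sketched like so (plane positions and units are illustrative assumptions):

    ```javascript
    // Velocity from two Z-plane crossings: distance between the planes
    // divided by the time the hand took to travel between them.
    function velocityBetweenPlanes(tStart, tEnd, planeA, planeB) {
      var dt = tEnd - tStart;                   // seconds between crossings
      if (dt <= 0) return 0;                    // guard against bad timer data
      var distance = Math.abs(planeA - planeB); // fixed constant in an installation
      return distance / dt;                     // higher = harder push
    }

    // General 3D distance formula, if the start/end points aren't fixed:
    function distance3(ax, ay, az, bx, by, bz) {
      var dx = bx - ax, dy = by - ay, dz = bz - az;
      return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }
    ```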


  • Hello Monty!  Good and busy summer.  Hope yours was also!  Thank you everyone for your feedback, and help.  I am going to be utilizing this community.  And hope someday I can offer help.

    I've got some work to do.
    vanakaru - good question, and it is making me think more about this idea. Which is what it is at this point: an idea, and it may prove beyond my current capabilities in Isadora, or not doable in Isadora at all, but here goes. When tracking a person using a skeleton overlay, and a point on the skeleton (say the hand) moves, then an object that has been connected to it moves = continuous connection. I don't want the object to move for every movement. Only when someone walks over to the objects being projected and "pushes" them = static, moved when pushed.
    So in rethinking my layout, the grid design wouldn't necessarily cover the wall; it might be in the downstage-left area only, so the other side of the stage could be empty for the dancer to dance in. Then the dancer would walk over and start dancing by the grid. When their hand "touched" one of the projected objects, it would move. Hope that helps, and thanks again all for helping.

  • My experience with the Kinect is somewhat limited, mostly 'cos the system is too unreliable. I hate to lose the connection for whatever reason/interference during a show.

    The most trustworthy solutions have been where the performer has an IR LED attached to the hand and a camera with an IR filter (which sees only IR wavelengths) doing the tracking.
    The Kinect is great and fun when it works, but look at the official games: players lose the connection all the time. The performance could be designed to make this chaos part of the show, but for minimalistic stuff... I don't know.

  • Agree with single IR light point tracking - infinitely more reliable, mainly due to simplicity - s'how I do head tracking for simracing (rFactor 2).

    That said, attached is a quick, vague (_very_ rough and nowhere near fully functional) PoC for collision detection of Left_Hand as an OSC controller from NI-Mate v1 & a Kinect 360 (represented by a red dot) against a green rectangle, using a fixed presumed Z plane. No velocity tracking etc., although technically NI-Mate sends velocity as a value scaling DOWN from 127 the faster it goes on /Hand/Left/V, if one cares to tap & utilise it.
    When you're in the zone the rectangle will move with your hand. When you're not, it won't.
    Based on @DusX's 2D Collision Detection as mentioned above.


  • Would probably have been simpler just to have the rectangle change color when you're in the zone and back again when you're not, but I've only just thought about that... meh

  • Very cool Marci... I will check this out. Of course - IR LEDs.

  • Izzy Guru

    I definitely need to catch up on this thread!

    It's something I've been wanting to achieve for over 10 years inside Isadora.

  • The trick would be utilise bits of Isadora I've never really delved into... 3D Projector etc.

    It's deciding on that Z plane that makes it usable. Without it, the triggering is too vague and unpredictable, but as soon as you say 'I no longer want to look just at X and Y, but Z too' it tightens things up and becomes infinitely more useful. The NIN video above is actually incredibly simple to pull off in Isadora.
    There would need to be a more refined loop to accomplish what I was going for in the patch - that is, to be able to grab, move and let go of something, akin to a left-click drag on a window. The position needs feeding back within the JS rather than using actors to smooth it all out... I was a bit ambitious for a quick two-minute job. Changing colour as you pass over, however, should be simple; therefore, having each "rectangle" as a paused movie with a still frame at the front for the "steady state", which then plays as you "hover", should be equally simple to achieve. Disconnect the horz and vert position inputs from the top Shape actor to stop it dragging (badly). Output 10 on the upper-right JS sends a 1 state when you're hovering over the rectangle. As you pass over it, it'll go from 0 to 1 and back to 0 again as a hard switch.
    NB - also used @DusX's umbilical cord / data multicore JS in there too... check it out on his blog to get to grips with what's happening between the two Javascript actors.
    I've said it before and I'll say it again - this is where some semblance of a DOM is desired in Isadora: a transparent web browser actor overlaid on the stage with the ability to scale resolution (i.e. render a browser window of 1024x768 on a 640x480 stage). It would make it incredibly easy to utilise existing user-friendly frameworks (e.g. jQuery) to get some very easy interaction effects, and also animation libraries... and there are PILES of collision detection libraries out there in web land. If such an actor existed, you'd then have WebGL to throw into the mix, plus all the power of the browser, the knowledge of web developers & LOTS of tutorials everywhere. *I dreamed a dream...*
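    The grab/move/let-go loop described above could be structured as a small state machine, something like this (entirely an illustrative sketch, not the patch's actual code - all names are made up):

    ```javascript
    // Grab-and-drag state machine: the rectangle only follows the hand
    // while "held"; grab when the hand enters the rectangle inside the
    // Z interaction zone, release when the hand pulls back out of it.
    function makeDraggable(rect) {
      var held = false;
      var offsetX = 0, offsetY = 0;
      return {
        update: function (handX, handY, inZone) {
          var over = handX >= rect.x && handX <= rect.x + rect.w &&
                     handY >= rect.y && handY <= rect.y + rect.h;
          if (!held && over && inZone) {   // grab: hand enters while in zone
            held = true;
            offsetX = handX - rect.x;
            offsetY = handY - rect.y;
          } else if (held && !inZone) {    // release: hand leaves the Z zone
            held = false;
          }
          if (held) {                      // drag: rectangle follows the hand
            rect.x = handX - offsetX;
            rect.y = handY - offsetY;
          }
          return held;
        }
      };
    }
    ```

    Feeding the position back inside the JS like this, rather than smoothing it with a chain of actors, is the "more refined loop" idea.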

  • Pretty darn impressive, Marci! This all makes sense, more or less. I need to get into it to see how it works. The Z plane does appear crucial, unless (I would imagine) the dancer was, say, flat against the wall, or being shot from above and using their feet to kick an object.

    I'm using Processing right now and have never tried NI-Mate. As I said, I'm new to this. I downloaded a free version of NI-Mate 2 but it doesn't send/receive OSC. For Basic or Pro they are now all subscription-based?!! Curious for any tips on NI-Mate.
    Also looking into building my little IR system to go onto my dancer's hand. Figure one on the front of the hand and one on the back should work. Any suggestions re NI-Mate? Looks like batteries and an IR LED, and I have my light for IR detection on the Kinect.