    How to have a person move an object that is generated/projected by Isadora

    How To... ?
    • Marci
      Marci last edited by

      The trick would be to utilise bits of Isadora I've never really delved into... 3D Projector etc.

      It's deciding that Z plane that makes it usable. Without it, the triggering is too vague and unpredictable, but as soon as you say 'I no longer want to look just at X and Y, but Z too' it reins things in a bit and becomes infinitely more useful. The NIN video above is actually incredibly simple to pull off in Isadora.
      There would need to be a more refined loop to accomplish what I was going for in the patch - that is, to be able to grab, move, and let go of something, akin to a left-click drag on a window. The position needs feeding back within the JS rather than using actors to smooth it all out... I was a bit ambitious for a quick 2-minute job. Changing color as you pass over, however, should be simple; so having each "rectangle" as a paused movie with a still frame at the front for the "steady state", which then plays as you "hover", should be equally simple to achieve. Disconnect the horz and vert position inputs from the top Shape actor to stop it dragging (badly). Output 10 on the upper-right JS sends a 1 state when you're hovering over the rectangle: as you pass over it, it'll go from 0 to 1 and back to 0 again as a hard switch.
      NB - also used @DusX's umbilical cord / data multicore JS in there too... check it on his blog to get to grips with what's happening between the two JavaScript actors.
      I've said it before and I'll say it again - this is where some semblance of a DOM is desired in Isadora: a transparent web browser actor overlaid on the stage, with the ability to scale resolution (i.e. render a browser window of 1024x768 on a 640x480 stage). It could make it incredibly easy to utilise existing user-friendly frameworks (e.g. jQuery) to get some very easy interaction effects, and also animation libraries... and there are PILES of collision detection libraries etc. out there in web land. If such an actor existed, you'd then have WebGL to throw into the mix, and all the power of the browser and the knowledge of web developers & LOTS of tutorials everywhere. *I dreamed a dream...*
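      The hover-with-a-Z-plane logic described above can be sketched as a plain JavaScript function of the kind you would drop into an Isadora JavaScript actor. This is a minimal sketch, not Marci's actual patch: the rectangle fields, the zPlane value, and the tolerance are all made-up example values.

```javascript
// Hedged sketch: axis-aligned hover test with a Z "interaction plane".
// rect is {left, right, top, bottom}; zPlane/zTolerance define an
// invisible slab in depth so only gestures near that depth trigger.
function isHovering(x, y, z, rect, zPlane, zTolerance) {
  const insideXY = x >= rect.left && x <= rect.right &&
                   y >= rect.top  && y <= rect.bottom;
  // Without this Z check, any XY pass-through would trigger - the
  // "too vague and unpredictable" case described above.
  const nearZ = Math.abs(z - zPlane) <= zTolerance;
  return (insideXY && nearZ) ? 1 : 0; // 1 while hovering, else 0
}
```

      Feeding a skeleton point (e.g. a hand) in as x/y/z gives the hard 0 → 1 → 0 switch described for output 10.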

      rMBP 11,3 (mOS 10.13) / rMBP 11,4 (mOS 10.14) / 3x Kinect + Leap / TH2Go
      Warning: autistic - may come across rather blunt and lacking in humour!

      • J
        JetJaguar last edited by

        Pretty darn impressive, Marci! This all makes sense, more or less. I need to get into it to see how it works. The Z plane does appear crucial, unless, I'd imagine, the dancer was say flat against the wall, or being shot from above and using their feet to kick an object.

        I'm using Processing right now and have never tried NI-Mate. As I said, I'm new to this. I downloaded a free version of NI-Mate 2, but it doesn't send/receive OSC. For Basic or Pro they are now all subscription-based?!! Curious for tips on NI-Mate.
        Also looking into building my little IR system to go onto my dancer's hand. Figure one on the front of the hand and one on the back should work. Any suggestions re NI-Mate? Looks like batteries and an IR LED, and I have my light for IR detection on the Kinect.

        Portland, Oregon
        Mac Pro Retina 2013: 2.6 GHz Intel Core i7, 16 GB DDR3 Ram
        iMac 5K: 4 GHz quad-core Intel Core i7, 32 GB DDR3 SDRAM

        • Skulpture
          Skulpture Izzy Guru last edited by

          Slow reply, but in relation to @Marci's question above:

          "Also shouldn't be too difficult to read Z if memory serves (**does NI-Mate return XYZ for a hand, or just XY**? Can't recall) also, and then create an invisible interaction 'plane' at a static Z point so that you can work in front or behind.
          Might get the Leap Motion out tomorrow and have a play small scale."
          NI-Mate sends XYZ, yes.

          Graham Thorne | www.grahamthorne.co.uk
          RIG 1: Custom-built PC: Windows 11. Ryzen 7 7700X, RTX3080, 32G DDR5 RAM. 2 x m.2.
          RIG 2: Laptop Dell G15: Windows 11, Intel i9 12th Gen. RTX3070ti, 16G RAM (DDR5), 2 x NVME M.2 SSD.
          RIG 3: Apple Laptop: rMBP i7, 8gig RAM 256 SSD, HD, OS X 10.12.12

          • Marci
            Marci last edited by

            @joejdrums - For NI-Mate you want v1 ideally, which has a lifetime license but will basically receive no further updates. I despise subscription licensing... sadly I think Delicode have just ruled themselves out of the market at our level, really. I'd have to have a ratch around to see if there are any more user-friendly offerings out there; otherwise yer stuck with continuing with Processing. If memory serves, simpleKinect on GitHub was the go-to for this...

            @Skulpture - yep I discovered that in the end! Ta!

            • J
              JetJaguar last edited by

              Downloaded simpleKinect and am going to look into it. I wasn't able to find NI-Mate v1. The project has potentially changed a bit: in meeting with the choreographer, this was shown to me as the latest idea. It seems canned? Possibly using tracking, but I'm curious about all of your opinions, as it is the evolution of my initial question. I'm trying to learn what Izzy is capable of before going too far down rabbit holes.

              Look at the 2:20 mark specifically of this [video](https://www.youtube.com/watch?v=-wVq41Bi2yE)

              • Marci
                Marci last edited by

                Nice effect. Doubt it's canned if AEF were involved...

                I'll have a ponder.
                At the moment my mind's heading off to MagicMusicVisuals, if I'm honest - converting the OSC to MIDI to use as a deformer on an interactive GLSL shader patch. Finding/authoring a shader patch to give the visual effect you want, however, would be the domain of... well... a GLSL shader authorer type d00d. Beyond me... I just swipe ones freely available online. (Is Izzy capable of this yet, or do we have a timeline for it @Skulpture?)

                • J
                  JetJaguar last edited by

                  Marci... you have sufficiently lost me. I do not know what you are talking about, sorry. "GLSL shader authorer type d00d." I have heard of OpenGL, and I basically understand what a shader is after looking it up. I wonder if Max/MSP/Jitter might be useful here. How does Izzy interact with Max?

                  • timeg
                    timeg last edited by

                    Just thought I would wade in with a historical reference that might trigger some other lines of thought (admitting I need to go back thoroughly through the thread).

                    In the early 1990s I used an Amiga-based system called Mandala, built on an 8-bit video input card called 'Live'. The Mandala software achieved very fast and reliable collision detection by allowing one to assign certain of the bitplanes to the incoming video signal and others to graphics in one's environment. When different bitplanes collided in the same pixel space, a range of functions could be triggered. It included gravity and attachment to left and right X coordinates. Surprisingly useful: a performer could grab looping anims, move them around, then throw them off with a flick of the wrist. There are many videos of performances and installations I did back in the day on my site.
                    One of the first things I did when I came to Isadora in 2009 was try, unsuccessfully, to recreate facsimiles of some of that work of mine. I use Kinect, Leap, Infusion I-CubeX and a bunch of other sensing devices. Like vanakaru, I am not overly fond of Kinect and find video camera solutions more reliable out of the studio.
                    I think there would be a strong place for a collision detection actor as described by Marci. I too dream a dream!

                    @timeg ---> Tim Gruchy | www.grup.tv | MPB (2021) M1max, 64G, OS12.5 | IZZ 3.2.6 | located in Sydney Australia

                    • Skulpture
                      Skulpture Izzy Guru last edited by

                      I agree. What has always baffled me is this:

                      There are ways of using JavaScript and other advanced calculations to detect two flat edges, so that when they hit each other they bounce off each other. This can be done using sprites, envelope generators, inside-range floats, etc.
                      But with a Kinect the edges are of course not just flat; they move and change quite rapidly - as well as having many edges. No idea where to even start with that!
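                      For the flat-edge case at least, the logic is only a few lines. A minimal sketch, assuming axis-aligned rectangles and a simple velocity flip on contact (the field names and the bounceX helper are illustrative, not any specific Isadora actor):

```javascript
// Hedged sketch: flat-edge collision between two axis-aligned
// rectangles, each {left, right, top, bottom}.
function overlaps(a, b) {
  return a.left < b.right && a.right > b.left &&
         a.top  < b.bottom && a.bottom > b.top;
}

// Reverse horizontal velocity on contact, otherwise keep it -
// the "bounce off each other" behaviour described above.
function bounceX(vx, a, b) {
  return overlaps(a, b) ? -vx : vx;
}
```

                      The hard part, as noted, is that a Kinect silhouette is not one rectangle but a constantly changing many-edged outline, which is where per-point polygon tests come in.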

                      • J
                        JetJaguar last edited by

                        A collision detection actor sounds like it is needed and would be helpful to many, but that implementation would be a challenge.

                        • Marci
                          Marci last edited by

                          "But with a kinect the edges are of course not just flat; they move and change quite rapidly - as well as having many edges. No idea where to even start with that! "

                          What are we talking here... the edges of what? What is interacting with what in which direction? I'm only talking using the OSC Skeleton points. I guess you're talking about something in some way interacting with NI-Mate's ghost output/body outline rather than just the simple hand / foot / head co-ords etc?
                          NB: I'm certainly no advocate of Kinect in any massive way - I just happen to have some kicking about to experiment with easily in front of the TV hence working down that route.

                          • Marci
                            Marci last edited by

                            PS: Izzy interacting with Max... OSC. I think these days pretty much everything talks OSC.

                            @timeg - what's your website address? Curious to see...!

                            • Skulpture
                              Skulpture Izzy Guru last edited by

                              @Marci I was talking about the depth image from a Kinect. So the multiple edges are the "outline" of a person.

                              • DusX
                                DusX Tech Staff last edited by

                                Wow, I got to this thread late in the game.

                                3D collision detection should be pretty easy in Isadora as long as you have good data.
                                I would be tempted to use 2 IR cameras: one in front and one above.
                                XY from the front, and Z from above. In any case, adding the Z to the quad detection should be easy.
                                I have only played with rects and circles for detection,
                                but if you want to determine more complex relations I would suggest looking into 'Point in Polygon' detection. See: https://github.com/substack/point-in-polygon
                                Since the number of points in a polygon may vary, it becomes important to pass data sets together (unless you are defining a specific form). It will help keep your patch clean. See: http://dusxproductions.com/blog/pro-tip-single-patchcords-multiple-values/
                                I would think that you could define a dynamic polygon based on the skeleton data that would be close to the actual figure.
                                Then, using this 'Point in Polygon' method, you should be able to make the collision detections required.
                                Again, I would 'fake' the 3D to some extent by making X number of Z regions (just lowering the resolution in this dimension, really).
                                It's great to see that my blog has been a help :)
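                                For reference, the core of the linked point-in-polygon approach is a short ray-casting test. A sketch of the same idea in plain JavaScript (variable names are mine; the linked library's exact code may differ):

```javascript
// Hedged sketch of ray-casting point-in-polygon detection, in the
// spirit of substack/point-in-polygon. polygon is an array of
// [x, y] vertex pairs; works for convex and concave polygons.
function pointInPolygon(point, polygon) {
  const x = point[0], y = point[1];
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const xi = polygon[i][0], yi = polygon[i][1];
    const xj = polygon[j][0], yj = polygon[j][1];
    // Count crossings of a horizontal ray from the point: an odd
    // number of edge crossings means the point is inside.
    const crosses = (yi > y) !== (yj > y) &&
      x < ((xj - xi) * (y - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}
```

                                A dynamic polygon rebuilt each frame from the skeleton joints could be passed in as the vertex list, with a hand or prop coordinate as the test point.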

                                Troikatronix Technical Support

                                • New Support Ticket Link: https://support.troikatronix.com/support/tickets/new
                                • My Add-ons: https://troikatronix.com/add-ons/?u=dusx
                                • Professional Services: https://support.troikatronix.com/support/solutions/articles/13000109444-professional-services

                                Running: Win 11 64bit, i7, M.2 PCIe SSD's, 32gb DDR4, nVidia GTX 4070 | located in Ontario Canada.

                                • Skulpture
                                  Skulpture Izzy Guru last edited by

                                  Some context to my idea/dream 

                                  https://www.youtube.com/watch?v=jKB0d9vsfgA 

                                  • Fred
                                    Fred last edited by

                                     And here again, a unified coordinate system (either normalised or pixel-based) would make comparative calculations and positioning in Isadora one million times easier.
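                                     As an illustration of the kind of conversion that currently has to be done by hand, here is a sketch of mapping tracker pixel coordinates into Isadora's 0-100 percentage-of-stage range (the 640x480 source size is an example assumption, not a fixed value):

```javascript
// Hedged sketch: convert pixel coordinates from a camera/tracker
// frame into a 0-100 percentage range, as used by Isadora's
// stage-relative position inputs.
function pixelsToPercent(px, py, srcWidth, srcHeight) {
  return {
    x: (px / srcWidth) * 100,
    y: (py / srcHeight) * 100,
  };
}
```

                                     Every device that reports in its own units (pixels, millimetres, normalised 0-1) currently needs its own version of this before positions can be compared.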

                                    http://www.fredrodrigues.net/
                                    https://github.com/fred-dev
                                    OSX 13.6.4 (22G513) MBP 2019 16" 2.3 GHz 8-Core i9, Radeon Pro 5500M 8 GB, 32g RAM
                                    Windows 10 7700K, GTX 1080ti, 32g RAM, 2tb raided SSD

                                    • Skulpture
                                      Skulpture Izzy Guru last edited by

                                      I've found an old patch that @gavspav made for me when talking about this a while ago.

                                      I can't get it to work but it may be handy for some. 

                                      d0d4c5-bounce-off.izz

                                      • DusX
                                        DusX Tech Staff last edited by

                                        @Fred It's in the feature requests. When you say normalized, what do you mean? I think the percentage measure Isadora uses can do the job; it's just knowing which elements are based on stage width vs. height that makes calculations difficult. Personally I like the scalability of this method: with a little care you can make projects that adapt to different displays very easily.

                                        • J
                                          JetJaguar last edited by

                                          DusX and Skulpture,

                                          How do I track an infrared LED in Isadora? I can see an infrared light using Processing and SimpleOpenNI IR tracking; it is being picked up by the Kinect. I can't figure out how to see the IR LED in Isadora and use it as a trigger. What would that actor configuration look like? Currently I'm simulating moving particles on the X and Y planes with a Mouse Watcher, but I would like to replace it with the IR LED as both DusX and Vanakuru suggested.

                                          • dbini
                                            dbini last edited by

                                             Infrared camera -> Video In Watcher -> Sprite (to calibrate size and positioning) -> Eyes -> gives you an X and Y for the infrared dot.

                                            John Collingswood
                                            taikabox.com
                                            2019 MBPT 2.6GHZ i7 OSX15.3.2 16GB
                                            plus an old iMac and assorted Mac Minis for installations
