assurance-tunnel

How to make the content of a mask follow the mask


  • Beta Gold

    I have a fairly simple setup: a ghost image from Processing/Kinect is the mask, and a Text Draw actor scrolls "karaoke"-style text vertically. The dancer was fairly static at the beginning, but now moves quite a bit on stage.

    How can I make the text follow the x/y of the Syphon mask? Do I need to do this in Processing, or is there a way to do it in Isadora? I'd prefer Isadora because I'm new to Processing.
    Thanks for your help


  • You can use Eyes++ to get the position of the mask in Isadora; however, it may well be better to do this in Processing with OpenCV, which will give you quite a bit of information about both the position and the shape of the mask.


  • Beta Gold

    @Fred

    How would I connect the Syphon receiver to Eyes++? So far I have only used it with a webcam. Not being a programmer, I think I need a little more detailed information :(
    Sample patches are the easiest way to build understanding and knowledge of Isadora. Or a more detailed description.
    best


  • You will need to use a GPU-to-CPU video converter actor; that is one thing about this method that is not as good as doing it in Processing, because the GPU-to-CPU conversion in Isadora is pretty taxing on your computer. It will be more efficient to run the tracking in Processing and send the values to Isadora via OSC.

    There is a pretty simple Processing example called "contours" that will do what you need.

  • Beta Gold

    @Fred ...within my Processing 2 examples there is no "contours" :( nor can I find it via Add Library.



  • You need to add the OpenCV library to Processing; then you will find a contours example (or something close to that name; I'm not next to it right now). It is a bit of a jump from Isadora, but there is a lot of good help on the Processing forums, and tutorials to get you going. Oh, and I am using Processing's latest version, 3.


  • Beta Gold

    @Fred thanks again for your support, but I'll stick with Processing 2, as all the Kinect/Isadora support is based on Processing 2, as far as I understand.

    I'll give it a try tomorrow...


  • There is OpenCV for Processing 2; I was just looking at 3 as that is what I have. It will be pretty much the same.


  • Beta Gold

    @Fred I managed to install it, but this sketch simply finds the contour of an image. It won't move my text or follow the ghost image of my Kinect. Maybe I explained it badly.

    I attached a pic and a short movie for better understanding. Eyes++ sounds OK too, but I have no clue where to start. The Syphon receiver provides me with a single video out. What comes after that? :( Or can I define a "ghost image" like a camera? I have a week to go, but if worst comes to worst I'll shift the text with a slider, although that's a fake then.

    6be337-screen-shot-2016-02-25-at-10.33.35.png 42aeab-text2.zip


  • Tech Staff

    NI Mate sends OSC and a ghost image.


  • Beta Gold

    @crystalhorizon I'm trying to avoid NI Mate as it has not proven very stable on my MBP. And I like the possibilities of Processing. Unfortunately I'm a newbie to this program. Yet ;)



  • I know what you want to do; there is a little more work to it than just opening the example. What is happening is that the contourFinder iterates through all the contours (the shapes that are found in a video frame). At the moment the example just gets all the points of the contours and draws them. However, for each contour you can also call contour.getBoundingBox(); this returns the smallest rectangle that fits around the shape of the contour. From this rectangle you can calculate the centre of the rectangle, and hence the centre of the contour, and hence the centre of the body (well, this will shift depending on the positions of the limbs).

    It should get you pretty close to what you need. You will probably have to add some smoothing, because when the body does not move but, say, the arms are outstretched, the centre position will still change, since it is derived from the bounding box of the whole shape.
    You can also do some more precise calculation based on the spread of the points that make up the contour if you are feeling tricky.
    There is another great free, open-source option for doing this instead of Processing: http://www.tsps.cc/ This is a fantastic product and, in my opinion, better than the paid NI Mate: more options and more intelligent, and of course open source if you wish to make changes.
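    The geometry described above (bounding-box centre, smoothing, and the more precise point-spread alternative) can be sketched in plain Java; Processing sketches are Java underneath, so the same arithmetic drops straight into draw(). The Pt class here is a stand-in for Processing's PVector, and the contour points would come from the OpenCV-for-Processing contour object, which isn't modelled here:

    ```java
    import java.util.List;

    public class ContourCentre {
        // Minimal point type standing in for Processing's PVector.
        static class Pt {
            final double x, y;
            Pt(double x, double y) { this.x = x; this.y = y; }
        }

        // Centre of the smallest axis-aligned rectangle enclosing the points,
        // i.e. the centre of what contour.getBoundingBox() would return.
        static Pt boundingBoxCentre(List<Pt> pts) {
            double minX = Double.MAX_VALUE, minY = Double.MAX_VALUE;
            double maxX = -Double.MAX_VALUE, maxY = -Double.MAX_VALUE;
            for (Pt p : pts) {
                minX = Math.min(minX, p.x); maxX = Math.max(maxX, p.x);
                minY = Math.min(minY, p.y); maxY = Math.max(maxY, p.y);
            }
            return new Pt((minX + maxX) / 2, (minY + maxY) / 2);
        }

        // Mean of all contour points: less sensitive to one outstretched arm
        // than the bounding box, since every point contributes equally.
        static Pt centroid(List<Pt> pts) {
            double sx = 0, sy = 0;
            for (Pt p : pts) { sx += p.x; sy += p.y; }
            return new Pt(sx / pts.size(), sy / pts.size());
        }

        // One-pole (exponential) smoothing: alpha near 0 = heavy smoothing,
        // alpha near 1 = follow the raw value closely.
        static double smooth(double previous, double raw, double alpha) {
            return previous + alpha * (raw - previous);
        }
    }
    ```

    Calling smooth() once per frame on each coordinate, with alpha around 0.1 to 0.2, is usually enough to stop the text jittering when the limbs move.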


  • @gapworks NI Mate v1 isn't coping well with the new USB structure in OSX 10.11 El Capitan, and v2 has a problem with Kinect sensors plugged into USB3 ports. Delicode are working on the latter problem; I assume the former isn't going to be solved, as v1 is no longer supported. Good luck with Processing.



  • @gapworks The Kinect/Processing tutorial sketch (I assume that’s what you’re using) sends torso position over OSC by default... use that to control your text’s X/Y position.

    From the default Izzy file within the download, add two OSC Listener actors, listening on channels 7 (x) and 8 (y). Link the value output of each to the x/y position of your Text Draw actor. Use Calculator actors in between to apply any required offsets or scaling to the values. The torso z position (distance from the sensor) should be on OSC channel 9 if needed.
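    The offset/scaling step those Calculator actors perform is just a linear remap, equivalent to Processing's map() function. A minimal sketch of the arithmetic; the example ranges in the usage note (a 640-pixel-wide sketch mapped to a -50..50 stage position) are illustrative assumptions, not values from this thread:

    ```java
    public class OscScale {
        // Linear remap: scales value from [inMin, inMax] into [outMin, outMax].
        // With inMin=0, inMax=640, outMin=-50, outMax=50, an incoming torso x
        // of 320 (sketch centre) lands at 0 (stage centre).
        static double map(double value, double inMin, double inMax,
                          double outMin, double outMax) {
            return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
        }
    }
    ```

    In Isadora the same thing is two Calculator actors (one multiply, one add) between the OSC Listener and the Text Draw position input.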


  • If you remove the following two lines (365 & 367) from the Processing sketch:

    canvas.stroke(userClr[ (userList[i] - 1) % userClr.length ]);
    drawSkeleton(userList[i]);

    ...then you can run with skeletons enabled without actually drawing them on screen, which will initialise the sending of the OSC data.
    If the centre of gravity still shows up (that should say WHEN, because it will), then delete lines 375 to 390 as well.


  • @dbini - I sent them links to the published fixes for the whole USB3 issue probably over a year ago (possibly over 2 years ago now that I think about it), along with links to fixes for kinect motor not working. I was told then that they wouldn’t fix the issue. These issues have been around since NI-Mate v1 and are all down to the version of libusb that they’re utilising. Same issue as with the SimpleOpenNI Processing Kinect modules. I’m presuming they don’t have the ability to update the version or implementation of libusb that’s included within the OpenNIv1 libs that are necessary to provide Kinect v1 support that allows skeletons, which means it probably won’t ever get fixed in v2 and they’ll end up dropping Kinect v1 support. The specific problems are within Delicode_NI_Mate/Contents/Frameworks/libOpenNI.dylib, libusb-1.0.0.dylib and stable_libusb-1.0.0.dylib

    S’why I ended up abandoning several hundred pounds worth of investment in their software (Ni-Mate & Z-Vector) - they weren’t prepared to fix known bugs, or even add an announcement to warn future purchasers of the unavoidable compatibility issues - a practice they continued when they released v2 with no announcement re: USB2 Kinect & USB3 ports. S’why I moved over to Processing for Kinect stuff in the first place. *shrug* - Joys of commercial software relying on out-of-development open source software.
    TL;DR: ANY & ALL software relying on OpenNI 1 to provide Kinect rev1 support will have issues with USB3 ports. That means any & all software which provides skeleton output from a rev1 Kinect.


  • @Marci - this kind of stuff is all beyond my capabilities. I read your posts and am constantly impressed by your level of detail. Thanks for your contributions to the Isadora community.

    I just want a toolkit that works, and I don't mind paying a bit for something that's going to be plug-and-play and solve my problems. I do object to buying a license for something that's going to be useless in 1 or 2 years. Here's what Delicode said in reply to my questions:
    "OS X El Capitan is still proving to be a huge problem due to the operating system USB system having been changed. We have a few ideas on how to fix this, but this will take a while. For now using Kinect for XBox 360 on El Capitan is not recommended, and staying in Yosemite is a safer bet."
    Fortunately I just got my MYO working nicely, so I'm going to focus on that for a while and hope to find a simple solution for Kinect sometime in the future.


  • If you want any form of longevity from Kinect, bin the rev1 (or restrict it strictly to non-skeleton/user detection and use Freenect-based solutions rather than OpenNI 1) and get a KinectONE instead... OpenNI 2 is maintained and active and has no issues.


  • Beta Gold

    @Marci I have only KinectONE. Two of them! And please do keep in mind that you are talking to a designer and photographer who is trying his best to learn some programming languages. Processing at the moment. So I failed after rev1..... :)

    best


  • (That was for @dbini)

