Hey, just an FYI - you can add the slash in the Live setup window, but it is a bit of a pain to do!
You probably know this already, but it may help other users.
Yes, but with the free version you cannot save it, so having to redo it each time is a real pain - a good reason for Osculator…
@jhoepffner Thank you for your reply. It is on Mac, and I believe I am working with Kinect V1 (model 1414, which I was advised works with Isadora better than the newer version) and Skeleton. I had not considered, or did not know about, the depth map. What are the advantages of working with the depth map? Do you use the two Kinects with the depth map? Thank you for the info on using Processing. That is something I wanted to explore, and I am glad this is the right way to go about it - to expand on that: do I need to code in Processing to get that result? Yes, Syphon is something I am trying to explore too, though I am still very new to this. The project is a pilot project with G4A funding, so it is R&D in a way - to test, explore and find ways of working. Lucie
– V1 (1414) and V2 do not work with Isadora directly at all (for the moment…). V1 is better supported on Mac than V2, but you can send different information from both to Isadora using other software.
– for V1, the best way is with Processing (see the Isadora tutorial): you can get skeleton info via OSC and various images via Syphon. With a little Processing knowledge you can prepare the info before sending it. The Isadora tutorial uses the SimpleOpenNI library, which works only with Processing 2; with Processing 3 you need the more modern Open Kinect for Processing, but there is no skeleton tracking there - only with Delicode NI-mate.
– for V2, you can send skeleton info from Delicode NI-Mate through Osculator to Isadora via OSC, and images (including the depth map) with Processing (the Open Kinect for Processing library).
– I use the depth map because my Kinects are zenithal (mounted overhead) and because I need a map I can analyze. I send the pre-analyzed (thresholded) image from Processing (with a shader) via Syphon to Isadora for tracking, for masking, and for various spatial and temporal image treatments.
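Not part of the tutorial itself, but to illustrate what "skeleton info via OSC" looks like on the wire, here is a minimal OSC 1.0 message encoder in plain Java. The address pattern `/skeleton/1/head` is an assumption for illustration, not a fixed Isadora naming scheme; inside Processing you would normally just use the oscP5 library instead of encoding by hand.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Minimal OSC 1.0 message encoder: an address pattern plus float arguments.
// OSC strings are NUL-terminated and padded to a multiple of 4 bytes;
// floats are 32-bit big-endian (ByteBuffer's default order).
public class OscEncode {

    // Encode a string with at least one terminating NUL, padded to 4 bytes.
    static byte[] padString(String s) {
        byte[] raw = s.getBytes(StandardCharsets.US_ASCII);
        int len = ((raw.length / 4) + 1) * 4;   // always leaves room for the NUL
        byte[] out = new byte[len];
        System.arraycopy(raw, 0, out, 0, raw.length);
        return out;
    }

    // Build a complete OSC message: address, type-tag string (",fff"…), args.
    static byte[] message(String address, float... args) {
        StringBuilder tags = new StringBuilder(",");
        for (int i = 0; i < args.length; i++) tags.append('f');
        byte[] addr = padString(address);
        byte[] type = padString(tags.toString());
        ByteBuffer buf = ByteBuffer.allocate(addr.length + type.length + 4 * args.length);
        buf.put(addr).put(type);
        for (float f : args) buf.putFloat(f);
        return buf.array();
    }

    public static void main(String[] unused) {
        // A hypothetical head-joint position (x, y, z) for one tracked user.
        byte[] msg = OscEncode.message("/skeleton/1/head", 0.1f, 0.5f, 1.8f);
        // 40 bytes: 20 (padded address) + 8 (padded ",fff") + 12 (three floats).
        System.out.println(msg.length);
        // This byte[] would be sent as one UDP datagram (java.net.DatagramSocket)
        // to the port Isadora's OSC listener is watching.
    }
}
```

The same payload is what oscP5's `OscMessage` builds for you; hand-encoding is only shown here to make the OSC-to-Isadora step concrete.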
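To make the "pre-analyzed (thresholded) image" step concrete: the idea is to keep only pixels whose depth falls inside a near/far band, producing a binary mask. Jacques does this on the GPU in a GLSL shader inside Processing; the sketch below is a plain-Java CPU equivalent of that logic, and the millimetre values are assumptions for illustration.

```java
import java.util.Arrays;

// Depth-map thresholding: turn raw depth readings (millimetres) into a
// binary mask. Pixels inside the [near, far] band become white (255),
// everything else black (0) - the kind of pre-analysis done before
// sending the image via Syphon to Isadora for tracking and masking.
public class DepthThreshold {

    static int[] mask(int[] depthMm, int nearMm, int farMm) {
        int[] out = new int[depthMm.length];
        for (int i = 0; i < depthMm.length; i++) {
            out[i] = (depthMm[i] >= nearMm && depthMm[i] <= farMm) ? 255 : 0;
        }
        return out;
    }

    public static void main(String[] args) {
        // Five sample depth pixels; 0 means "no reading" on the Kinect.
        int[] depth = {400, 1200, 2500, 0, 1800};
        // Keep only what lies between 0.5 m and 2.0 m from the sensor.
        int[] m = mask(depth, 500, 2000);
        System.out.println(Arrays.toString(m)); // [0, 255, 0, 0, 255]
    }
}
```

In a real zenithal setup the band would be tuned so the floor falls outside it and bodies fall inside, which is what makes the mask stable enough for tracking.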
@jhoepffner Thank you, this is all very helpful. I am trying to test this idea, which says that it works with Processing 3. So if I understand your reply correctly, if I have a Kinect V1 it will only work with Processing 2, right? The video below is something I am trying to do. (The code is available to download.) I am also reading about the depth map, which is used with the Kinect V2. So if I wanted to purchase a V2 for Mac, where should I look for one? What model? Do you have any examples of your work with V2?
Perhaps begin with an easier example; there are so many different and complicated propositions here…
Corrections: Kinect V1 works with Processing 2 and 3; it's the SimpleOpenNI skeleton library that works only with Processing 2 at the moment (there is an improved version for P3 on GitHub, but it doesn't work for me). All V2 models work, but you need the "Kinect Adapter" from Microsoft to use it, plus a USB 3 port.
A work made with the V2 depth map is here
@jhoepffner Thank you for sharing your work. It is very interesting what you can achieve with the depth map - something to test, perhaps. Regarding my post about the Processing sketch: I know it is advanced stuff for me at the moment, but it is perfect for the project I am working on. I need some 3D images while motion tracking, and this sketch from GitHub looks exactly like what I am looking for. So I am urgently looking for someone who could help me get this up and running. Do you know anyone who knows Processing 3 and the Kinect? I posted this on the Processing forum and managed to download all the libraries, but I get an error about the Kinect scanner, which I believe is something to do with libfreenect (installing the drivers via the terminal), right? Can anyone out there help me urgently? I would like to have this up and running ideally by this Saturday. It would be paid work - I have received funding. I would really appreciate the help. Lucie
Where are you situated on the globe?
If you're in New York City, I can probably assist you.
Hi Jacques and Lucas and everyone,
In workshops I generally use NIMate to Syphon the User ID (colour-coded figures) Live View out to Isadora (Kinect 2 on Mac). Is it possible to do something like this in Processing? Even if the figures are all the same colour, I just need to separate them from the background in a nice, stable way (for a 3-week always-on installation piece).
Hi there @dbini,
Why do you want to use Processing instead of NiMate? It is possible to do this in Processing, but personally I would rather use NiMate than Processing.
@Juriaan - I'm trying to keep costs down, and I don't want the bunny logo appearing every 10 minutes.
I would also be interested in how to get something like the ghost image from NI-Mate using cheaper or free software.
This may help:
The process should be very similar when using the first Kinect (the Kinect 360).
@Skulpture does the free NI-Mate version include the ghost image? With a watermark?
@crystalhorizon - you can get the ghost image out of NIMate via Syphon, but the watermark pops up every 10 or 15 minutes if you don't have a license.
@dbini I see
Try the following extension for Max 7. I used it in the past for installation work and performances using the Kinect 2. You can try it for free for a limited time; after that, the author asks €26.00 for continued usage. Max is €9.00 a month, but once you are done building the Max patch you can cancel the Max subscription, since you can build a standalone application - and off you go :smile:
dp.kinect2 is a plugin (an external extension) for the Cycling ’74 Max development environment that lets you use your Microsoft Kinect v2 sensor (Xbox One) on your Windows PC.
- Color image, depth, and IR sensor output in many pixel formats
- User identification, location, and occlusion
- Skeleton joint tracking with orientations
- Body properties, hand tracking, lean, body restriction
- Point clouds, accelerometer, and gravity
- Sound location and strength; speech recognition
- Face tracking with pose, rotation, translation, bounding boxes, key 2D and 3D points, smiling, eye engagement, eye open/closed, mouth open/closed, skin/hair color; face 3D modeling with animation/shape units
- Data alignment, filtering, smoothing, rotation to gravity
- jit.anim.node support
- Compatibility with dp.kinect and jit.openni to aid in migration
- Help file with examples and links to online tutorials and documentation
- Support for collections, packages, and executables
- Tested against Max 7 and Max 6 for both 32 and 64-bit Windows platforms
- Based on official Microsoft Kinect v2 drivers for reliability and support