Seeking teaching advice (1 kinect + 10-15 high school students)
-
Hi everyone
My question might be a weird one. I have used Isadora with high school students several times for projects (poetry films) in my creative writing classes. We've built interactive films using an Xbox Kinect and it has worked OK. But up until now, the collaboration has been with me on my laptop, which is loaded with Isadora and the OpenNI tracker plugin. My students' role was to write their poems and create the media they wanted to use with the Adobe suite and various other software tools. Based on their vision, I have been the one creating the patches with their media and poetry. So when we test and record the films, we are using my laptop and they are the ones moving in front of the Kinect camera. It has been kind of clunky, but the kids have been happy with the collaboration for those particular classes (which are writing classes, not computer classes).
This spring, I'd like to do a similar unit with my students, but rather than building the patches for them, I'd like to teach them some basics of how to use Isadora themselves. So I thought that we'd rent an Isadora license for a week for each kid. The school is willing to pay for the rentals, so that is good, but I am having a hard time imagining how we will all share my personal Kinect camera in an efficient way, and it is definitely out of budget to purchase more Kinects. On the other hand, if this one unit goes well, I may be able to convince the school to buy a bunch of perpetual licenses and more equipment so that I can do a much bigger project next year.
Anyway, I have this vague idea that I will create a standard patch to get the students started on their own laptops and then teach them how to modify it with other actors. But how do I emulate the Kinect input so that they can all work on their individual projects simultaneously? I was thinking that they could use a Mouse Watcher actor as input to give them a rough idea of the interactivity while they wait for their individual turns at the camera. Or perhaps I could record a person's movement and have the kids use that recording with their patches while they wait their turns. But it seems like neither of these solutions would emulate the depth data that we get from the Kinect.
So I thought that I would ask you all. It doesn't have to be perfect. I just want a way to emulate or fake the Kinect input so that all of the students can work simultaneously on their own laptops rather than us all sharing one. How do I do this given that we only have one camera?
Thanks!
-
The idea of making a recording with OpenNI is a good one: you will get the exact same skeleton data from the recording, and you can share the recording with each student so they have control of it. That provides a lot of flexibility, but not the live element, or the ability to try specific moves on the fly.
Another option, which might suit you well, is to use your laptop with the Kinect and share all the skeleton data with your students as OSC channels. The easiest way to do this would likely be to have an additional Wi-Fi router that you and your students connect to (not connected to the Internet or the school network). This wireless network would just carry the OSC skeleton data.
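To make the OSC idea more concrete, here is a rough Python sketch (using the python-osc library) of what "sharing skeleton data as OSC channels" amounts to. The IP addresses, port, and address names are just placeholders, and in practice the OpenNI tracker and Isadora's own OSC actors handle this for you; the sketch only shows the shape of the data each student laptop would receive.

```python
# Illustrative sketch only: forward a few skeleton joint values as OSC
# messages to each student's laptop on the dedicated Wi-Fi network.
# Requires: pip install python-osc
from pythonosc.udp_client import SimpleUDPClient

# Placeholder IPs of the student laptops on the shared router
STUDENT_IPS = ["192.168.1.101", "192.168.1.102", "192.168.1.103"]
OSC_PORT = 1234  # placeholder; must match the OSC input port set on the students' machines

clients = [SimpleUDPClient(ip, OSC_PORT) for ip in STUDENT_IPS]

def send_joint(address, x, y, z):
    """Send one joint position (e.g. '/skeleton/1/head') to every laptop."""
    for client in clients:
        client.send_message(address, [x, y, z])

# One made-up frame of head and hand positions
send_joint("/skeleton/1/head", 0.1, 1.6, 2.3)
send_joint("/skeleton/1/right_hand", 0.4, 1.1, 2.0)
```

On each student laptop, an OSC Listener actor (with Isadora's incoming OSC port set to match) should then pick up these channels the same way it would pick up the live OpenNI data.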
This would allow all the students to use the same skeleton data at the same time. They may need to take turns interacting with the Kinect to work out specific cases, but it should provide a good experience.
-
Thank you! Those ideas are very helpful. I am going to test them out this week and I may be back with a few more questions, but I'm excited to realize that I can find a way to make individual projects work. I think these projects are going to be fun, and the students will learn more than they did in previous years.
Kirsten
-
@dusx said:
The idea of making a recording with OpenNI is a good one: you will get the exact same skeleton data from the recording, and you can share the recording with each student so they have control of it.
Additionally, if you set the default save location to a folder on your computer that is always synced to Dropbox or Google Drive, you can make recordings and the students can access and download them almost immediately.
You could also broadcast the live Kinect depth video to them over NDI.
-
These Kinect files of mine might be helpful as a starting point: https://www.dropbox.com/scl/fo/5vx98o6dne2e5g2hz0pyn/h?dl=0&rlkey=dcauvbj2893b674e3i8iix7wo
-
I would personally do a combination of the two: recordings that you already made, plus maybe video footage from the Kinect's RGB camera so they can see the image / see the puppet / whatever you wish to do with the Kinect data.
And then a few scenes with live data! Either thru OSC / NDI / whatever works in your situation :)