Attached is a macro for performance monitoring. We wanted to judge the impact of different scenes on Mac mini performance, so we placed it in a background-activated scene.
One benefit: if you have an unattended installation, every time the background scene gets activated (restart, reboot, …) a new timestamped file is created.
Perhaps this helps somebody.
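For anyone who wants the same idea outside Isadora, here is a minimal Python sketch, not the macro itself: on each launch it creates a new timestamped log file and appends load-average samples. The filename pattern and sample interval are my own assumptions, not taken from the macro.

```python
# Sketch of the macro's idea: one new timestamped file per activation,
# with periodic performance samples appended to it.
import os
import time
from datetime import datetime

def make_logfile(directory="."):
    # A fresh file per activation, e.g. perf_2024-01-31_12-00-00.log
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    return os.path.join(directory, f"perf_{stamp}.log")

def log_sample(path):
    # os.getloadavg() returns the 1/5/15-minute load averages (Unix/macOS)
    load1, load5, load15 = os.getloadavg()
    with open(path, "a") as f:
        f.write(f"{time.time():.0f}\t{load1:.2f}\t{load5:.2f}\t{load15:.2f}\n")

if __name__ == "__main__":
    path = make_logfile()
    for _ in range(3):
        log_sample(path)
        time.sleep(0.2)
```

Because a new file is created at every activation, restarts never overwrite an earlier run's data, which matches the benefit described above.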
We found in production that entering real-time text through Isadora (which we use to control the rest of the media on the smartphones) is hard.
So I quickly created a standalone program to make it easier to send text to the Augmented theatre App.
The demo of the program is here:
The program (OS X) can be downloaded here:
It sends on port 3000 and broadcasts to 255.255.255.255.
Give it a go.
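If you want to script the same thing yourself, the post above gives enough to sketch it: a UDP datagram broadcast to 255.255.255.255 on port 3000. The plain UTF-8 framing (no header) is an assumption on my part; the real app may expect something more specific.

```python
# Hedged sketch of the sender described above: broadcast a UTF-8 string
# over UDP to 255.255.255.255 on port 3000.
import socket

BROADCAST_ADDR = "255.255.255.255"
PORT = 3000

def send_text(message, addr=BROADCAST_ADDR, port=PORT):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # SO_BROADCAST must be enabled before sending to a broadcast address
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    try:
        # Returns the number of bytes sent
        return sock.sendto(message.encode("utf-8"), (addr, port))
    finally:
        sock.close()

# Usage: send_text("Hello from the booth")
```

Any listener on the same subnet bound to port 3000 should then receive the text.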
Question: Are you currently building the Isadora patch for this project yourself? I've been meaning to play more with the Kinect and skeletal tracking via Processing for a while (it has been on my ever-growing list for a number of months), so I could dive into building a skeleton of an Isadora patch around your concept if you'd like.
Our performances are generally composed of pre-determined sections in which we all improvise.
Thanks for sharing that. I was really interested in the use of disruption by the dancers and musician, how they would at times invade the space of each other and appear to undermine the performance. Similarly, the manipulation of the film disrupted its flow and montage.
We use a lot of structured improvisation in our performances. But I do find it challenging to develop visual systems that can approach the fluidity and variation that is apparent when dancers and musicians improvise. Using live feeds to transpose the improvised moment is one obvious solution. Using generative visuals that respond to the dynamics of the performance is another approach. I am still looking for a technique for a visual engine that can match the nuance of improvising dancers and musicians.
Conversely, the challenge of developing responsive systems has also formed in me an appreciation for an approach that is more controlled and defined: storyboarding, scoring, and rendering a fixed video file for playback. However, as GPU performance improves and we develop the capacity for responsiveness and real-time rendering, our visual engines will appear more and more alive to improvisation.