Our performances are generally composed of pre-determined sections in which we all improvise.
Thanks for sharing that. I was really interested in the use of disruption by the dancers and musician, how they would at times invade each other's space and appear to undermine the performance. Similarly, the manipulation of the film disrupted its flow and montage.
We use a lot of structured improvisation in our performances. But I do find it challenging to develop visual systems that can approach the fluidity and variation that are apparent when dancers and musicians improvise. Using live feeds to transpose the improvised moment is one obvious solution. Using generative visuals that respond to the dynamics of the performance is another approach. I am still looking for a technique for a visual engine that can match the nuance of improvising dancers and musicians.
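(As an aside for readers curious about the mechanics: one common starting point for visuals that "respond to the dynamics of the performance" is mapping an audio feature, such as loudness, onto a visual parameter. The sketch below is purely illustrative and is not a description of the system discussed here; the `floor` and `gain` constants and the brightness mapping are hypothetical choices.)

```python
import math

def rms(samples):
    """Root-mean-square amplitude of an audio buffer (a rough loudness measure)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def brightness_from_audio(samples, floor=0.1, gain=2.0):
    """Map buffer loudness to a 0..1 visual brightness parameter.

    `floor` keeps the visuals faintly alive in silence; `gain` scales
    responsiveness. Both are illustrative tuning constants, not values
    from any real system.
    """
    level = rms(samples)
    return max(0.0, min(1.0, floor + gain * level))

# A loud buffer drives brightness higher than a quiet one.
quiet = [0.05 * math.sin(i / 10) for i in range(512)]
loud = [0.8 * math.sin(i / 10) for i in range(512)]
print(brightness_from_audio(quiet) < brightness_from_audio(loud))  # True
```

In a live setting the buffers would come from a microphone or mixer feed each frame, and richer features (spectral centroid, onset density) could drive other parameters, which is one route toward the nuance described above.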
Conversely, the challenge of developing responsive systems has also given me an appreciation for a more controlled and defined approach: storyboarding, scoring, and rendering a fixed video file for playback. However, as GPU performance improves and we develop the capacity for responsiveness and real-time rendering, our visual engines will appear more and more alive to improvisation.