Wonder Dome - a bit about a huge project at Arizona State University

  • **Intro**
    Recently I was a part of the thesis project of [Daniel Fine](http://danielfine.net/) and [Adam Vachon](http://www.adamvachon.com/). Both of these gentlemen are about to graduate from Arizona State University. Dan’s MFA is in Interdisciplinary Digital Media and Performance, and Adam’s MFA is in Performance Design with a concentration in Lighting Design. As they approached their final year in their respective programs, they wanted to tackle a thesis project that both had a large scope and pushed them past their boundaries as designers and practitioners. One of the central ideas Dan had been exploring during his time at ASU is working inside of immersive projection environments; he was especially interested in working inside of domes. It was partially out of this interest that the idea of Wonder Dome was born.
    **The Team**
    One of the central questions for a project like this is, “who do you recruit to work on it?” Dan started in the Spring of 2013 by gauging interest in the project and asking interested artists to commit to a production slated for the following Spring. The central design / programming team was Daniel Fine (Direction, Media Design), Adam Vachon (Lighting Design), Alex Oliszewski (Media Design), Matthew Ragan (Media Design, Programmer), and Stephen Christiansen (Sound Design). This core group quickly grew to include scenic design, costume design, production management, and much more. From the beginning, Wonder Dome was a labor of love and a team effort at all times.
    **The Story**
    All of these developments were, of course, happening while the story for the production was still in the oven. We had talked about a number of different adaptations or stories to draw from for this first production, but nothing felt like it was sticking. Dan eventually brought in his writing partner Carla Stockton to help give the story some real traction. We ended up with a story loosely based on the fairy tale of the three little pigs. Pinky (a pig puppet), the oldest, suddenly finds himself in a fairy tale with the wrong ending and sets out to find a storyteller to fix his story. He runs into the storyteller and Leo (the dome personified as a character in the play), who then lead him on a wild romp of misadventures as they run from a coyote that’s been masquerading first as the big bad wolf, then as the whale from the Pinocchio story, and finally as the giant from Jack and the Beanstalk. The production involved real puppets, digital puppets, and interactive moments.
    To make all of this work we started by first thinking about what we needed our system to be - what were the requirements to make all the moving parts actually move? Media ended up with a distributed system spread across four computers - three Macs and one PC. Our media server, the PC, ran our warping and blending tools as well as the show control software. Each Mac was the system for a different puppet in the show.
    Leo, the character of the dome, was driven by Faceshift - a program that uses a Kinect to connect a performer’s face to a digital puppet in real time.
    Pinky, the pig, was both a live puppet and a digital puppet. While trapped in the belly of the whale, Pinky gets pigsilated, finding himself suddenly trapped in Leo’s digital world as a digital facsimile of himself. Pinky’s puppet was created as a series of animation sequences made first in After Effects. These short movies were then driven by Isadora, a Wii remote, and OSCulator. Alex Oliszewski was the animator, programmer, and magician for all of the puppets, and it was a joy to see him in his element programming in Isadora.
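    I can't share the actual Isadora patch here, but the general pattern - OSCulator translating Wii remote input into OSC messages that trigger pre-rendered animation sequences - can be sketched roughly like this. The OSC addresses and clip names below are hypothetical stand-ins, not the ones we actually used:

```python
# Hypothetical sketch: mapping OSC-style addresses (as OSCulator might emit
# for Wii remote buttons) to pre-rendered animation clips. The addresses and
# clip names are made up for illustration.
from typing import Optional

# Each Wii remote button is routed to one of Pinky's animation sequences.
CUE_MAP = {
    "/wii/1/button/A": "pinky_talk.mov",
    "/wii/1/button/B": "pinky_walk.mov",
    "/wii/1/button/Up": "pinky_jump.mov",
}

def handle_osc(address: str, value: float) -> Optional[str]:
    """Return the clip to play when a button-down message (value 1.0) arrives."""
    if value == 1.0 and address in CUE_MAP:
        return CUE_MAP[address]
    return None  # ignore button-up messages and unmapped addresses
```

    In Isadora itself this routing happens graphically with OSC listener and movie playback actors rather than in code, but the idea is the same: one physical control, one cue.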
    The Coyote, our villain and later friend, was also a digital puppet driven out of Isadora in the same fashion as Pinky. Everyone on the media team has a close relationship with Isadora, and has used it in some capacity in almost every show. It was because of this that we reached out to Mark to see if there was any chance we could install some temporary licenses on the machines used for the puppets. Mark very graciously accepted, which allowed us to drive our digital puppets with Isadora.
    One of the central questions we kept returning to was, “how do we make all of these things work together?” That’s a big question, and one that was incredibly difficult to answer. We explored a number of different solutions, and finally decided that in addition to using our media server to drive multiple outputs, we needed to be able to capture multiple streams of video simultaneously. To this end we considered a number of different Blackmagic Design solutions, and finally opted to install three Blackmagic Intensity Pros in our media server. This was a difficult decision, partially because of cost and partially because of the number of PCI slots used, but it was ultimately the right one for our server. Our puppet machines were a combination of 2009 Mac Pros and MacBook Pros borrowed from ASU. While having this equipment for free was awesome, one of the obstacles we discovered was that the ATI cards installed in the Mac Pros didn’t output a standard that our Blackmagic cards could capture. This meant that we also had to install Intensity Pros in the sending towers - the borrowed laptops were sending a broadcast standard just fine.
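    To illustrate the incompatibility we ran into: a capture card only locks onto signals in the broadcast standards it supports, so checking a machine's output mode against the card's supported list is the sanity check we effectively did by hand. A rough sketch - the mode strings below are illustrative, not an actual Intensity Pro capability list:

```python
# Hypothetical compatibility check: does a machine's video output mode match
# a standard the capture card accepts? The mode strings are illustrative,
# not a real Intensity Pro spec sheet.
CAPTURE_CARD_MODES = {"1080i59.94", "1080i50", "720p59.94", "720p50", "NTSC", "PAL"}

def can_capture(output_mode: str) -> bool:
    """True if the sending machine's output mode is one the card can lock onto."""
    return output_mode in CAPTURE_CARD_MODES

# A GPU outputting a plain computer scan mode won't be captured,
# while a broadcast-standard output will.
```

    In our case the Mac Pro GPUs were putting out computer scan modes rather than broadcast standards, which is why adding Intensity Pros on the sending side solved it: the cards speak the same standards on both ends.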
    Another challenge was how to send the video. For our Isadora machines this wasn’t an issue, but it was a huge challenge for the machine running Faceshift (Leo). The Faceshift machine needed a second virtual screen to display only the puppet face. To achieve this we used a combination of tools: Syphon Virtual Screen to create a virtual screen output as a Syphon stream, and Black Syphon to send that stream to our Blackmagic card, which was then captured by our media server.
  • Izzy Guru

    Nice report - thanks :)

  • Is the project completed? Can we see something from it??

    We're still compiling all of our documentation from the show - whoa golly was it a wild one. In the meantime there's a little bit that's up on the web right now:

    [There's the UStream of the show here](http://www.ustream.tv/channel/wonder-dome?utm_campaign=www.facebook.com&utm_source=ustre.am%2F1byX4&utm_medium=social&utm_content=20140409143443)
    [Dan has a short interview here](https://vimeo.com/89351419)
    [The Facebook Page for Wonder Dome also has some great photos up](https://www.facebook.com/WonderDomeUSA)
    [Inside Wonder Dome - more about the work we did with TouchDesigner](http://matthewragan.com/2014/04/09/inside-wonder-dome-touchdesigner/)
  • Izzy Guru


    Thanks for the interesting report. A few weeks ago I had a meeting with the guys from FaceShift. We are using the software for a research project, and I am quite excited about the possibilities.


  • Yikes!

  • Izzy Guru

    Wow http://www.faceshift.com/ looks amazing!!

    Faceshift was pretty fun to use, though in the spirit of full disclosure there's a lot of configuring to do to make sure that it works well. Native Syphon support would have really made magic for us; instead we found a decent workaround with a virtual screen. We also made sure the model was on a background that we could key out from our media server, allowing us to isolate just the puppet.
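    As a rough illustration of that keying step, here's what a per-pixel chroma key boils down to. The green key color and tolerance below are arbitrary stand-ins, not the values our media server actually used:

```python
# Hypothetical per-pixel chroma key: decide whether an RGB pixel belongs to
# the keyed-out background or to the puppet. Key color and tolerance are
# illustrative stand-ins for whatever the media server was configured with.
KEY_COLOR = (0, 255, 0)   # pure green background
TOLERANCE = 60            # max per-channel distance still counted as background

def alpha_for(pixel):
    """Return 0 (transparent) for background pixels, 255 (opaque) otherwise."""
    distance = max(abs(c - k) for c, k in zip(pixel, KEY_COLOR))
    return 0 if distance <= TOLERANCE else 255
```

    Real keyers work in better-suited color spaces and soften the edge with a falloff, but the principle is the same: anything close enough to the key color becomes transparent, which is what isolated the puppet face for compositing in the dome.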