• What is the correct use of the two Edge Blending actors?

    Edge Blend Mask:
    I used one actor for each projector in every scene.

    Global Edge Blend Mask:
    I haven't had a chance to play with this one yet, but is it correct to assume that one would have an activated scene at the beginning of the show where the edge blending would be done, and this scene would remain activated for the entire show?

    If so, how does this work with the "Stage Settings" window found under Output?

    Thanks in advance,

  • I'm back in the theatre now, workshopping a new piece and I'm getting a chance to really play around with the new feature set of 2.0.  I'm also a bit confused about best practice in terms of edge blending.

    I have a fairly typical setup: a 28'w by 9'h RP screen with three short-throw projectors that I want to edge blend across the screen.  Since my setup isn't going to change at all, I've used the Stage Setup window to feather the edges of the projectors, which works pretty well.  I run into problems, however, when I try to split a single video feed across the three stages.  I can't seem to find a way to do that while keeping all the processing on the GPU.  In the old days, I would use a single video source (movie player or what have you) and run the output to three separate Projector actors, each feeding a separate stage.  I'd then modify each Projector actor to show the portion of the video feed that I wanted to make up the entire image (I'd use some alpha mask wizardry to get the blending right, but that's no longer required).  Since the output stage is now selected on the video source instead of the projector, I can't use this method anymore.
    I've tried (somewhat successfully) using three separate video players playing the same video file, but then there are issues with sync, as the videos don't always load at the same time when I initialize a scene.  There's also the issue that I have to do three times the work to get a single movie to play.  I also assume that playing the same movie three times over isn't very efficient in terms of processing.
    What I'd really like to do is set all the blending and converging and save that as a single stage, so that I could use a single projector actor to project on the whole screen.  I guess that I could syphon out to a different program like Mapio to do my blending, but I'd really like to do it all in Isadora.
    Does anyone have any suggestions?  Am I completely missing the point here somewhere?

  • Dear @CitizenJoe and @CraigAlfredson

    Both of Citizen Joe's assumptions are correct: the Edge Blend Mask actor only operates locally -- that is, in relation to the Scene in which it lives. It really is a completely independent mask -- it is rendered on top of the Stage after all the other images in that scene are rendered. In fact, if you have set up an edge blend in the Stage Setup window, this Edge Blend Mask would be drawn in addition to the Stage Setup mask.
    The Global Edge Blend actor sets the parameters once, and they will stay that way until a new Global Edge Blend actor is encountered. These settings override the settings in the Stage Setup window. In other words, if you encounter a Global Edge Blend actor, any settings in the Stage Setup window would be ignored, and the settings from the Global Edge Blend actor would be used instead.
    To see all of this in action, open a file with a default Stage Setup (i.e., it doesn't modify the output at all) and no edge blend actors. Then
    1) Go to the Stage Setup window, and set the right edge width to 20%; you'll see a black gradation on the right.
    2) Now add a Global Edge Blend Mask. As soon as you do, the 20% gradation specified in the Stage Setup window will disappear, because now the Global Edge Blend Mask overrides it. Set the 'left width' input of the Global Edge Blend actor to 20%. Now a 20% gradation appears on the left.
    3) Now add an Edge Blend Mask actor. Nothing changes at this point, because the mask goes all the way to the edge.
    4) Finally, set the 'left width' input of the Edge Blend Mask to 20%. You'll see the existing gradation darken, because the Edge Blend Mask is drawing a second gradation on top of the Global Edge Blend Mask's gradation.
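    The darkening in step 4 can be modeled with a little arithmetic. This is only a conceptual sketch -- it assumes each mask multiplies pixel brightness by a linear ramp over its width, which may not be Isadora's exact blend curve -- but it shows why two stacked 20% masks look darker than one:

    ```python
    # Conceptual model: each left-edge blend mask multiplies pixel brightness
    # by a ramp that rises from 0 at the edge to 1 at the mask's width.
    # (x is a horizontal position in 0..1; width is the mask width in 0..1.)

    def ramp(x, width):
        """Brightness factor for a single left-edge mask of the given width."""
        return min(x / width, 1.0) if width > 0 else 1.0

    def stacked(x, w1, w2):
        """Two masks drawn on top of each other multiply their factors."""
        return ramp(x, w1) * ramp(x, w2)

    # A single 20% mask lets through 50% brightness halfway into the blend...
    print(round(ramp(0.1, 0.2), 2))          # 0.5
    # ...but two stacked 20% masks let through only 25% at the same spot,
    # which is why the gradation visibly darkens in step 4.
    print(round(stacked(0.1, 0.2, 0.2), 2))  # 0.25
    ```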
    As I write this, I could see the benefit of changing the name of the Global Edge Blend Mask to "Stage Setup Override." This might help people understand. 
    But, perhaps you are starting to see that it wasn't really my intention that either of these actors be used at the same time as the Stage Setup window. Each offers an alternative approach of edge blending.
    Craig, what you're encountering is a problem fundamental to Isadora's current behavior, one that will change very soon. But it's going to be hard to work around it until that change arrives.
    I've gone into a lot of detail below – maybe too much. But I figure knowledge is power, and if you want to understand the problem and the eventual solution – which will be available in short order -- you can read all the "gory details" below.
    But to answer your immediate question: if you want to split the image to the three outputs of a Triple Head 2 Go, then you must not use the "left/middle/right" options in the Stage tab of the preferences. Instead, you must let Isadora see the TH2Go as one big stage (i.e., 3072 x 768) and split the images yourself using the Projector actor. If you are on a machine that uses two or more video cards that are physically separate, then you really can't split a single vid-gpu image between them. (The reason is explained in the "gory details section.")
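    To make the "one big stage" approach concrete, here is the arithmetic for slicing a 3072 x 768 TH2Go stage into three regions, one per Projector actor, with an overlap zone for blending. This is only illustrative math -- the function and parameter names are mine, not Isadora actor inputs -- but the resulting pixel bounds are what each Projector would need to show:

    ```python
    # Illustrative arithmetic (not actual Isadora actor inputs): compute the
    # horizontal pixel region each of three Projector actors should display
    # when splitting one 3072x768 TH2Go stage, with a chosen blend overlap.

    def split_regions(stage_width, outputs, overlap_px):
        """Return (left, right) pixel bounds of each output's slice of the source."""
        out_w = stage_width // outputs  # physical width per output (here 1024)
        regions = []
        for i in range(outputs):
            # Each slice extends into its neighbors by overlap_px on each side,
            # clamped at the outer edges of the stage.
            left = max(i * out_w - overlap_px, 0)
            right = min((i + 1) * out_w + overlap_px, stage_width)
            regions.append((left, right))
        return regions

    print(split_regions(3072, 3, 128))
    # [(0, 1152), (896, 2176), (1920, 3072)]
    ```

    The overlapping bands (896-1152 and 1920-2176 here) are where the edge blend gradations would cross-fade between adjacent projectors.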
    Here's the problem: imagine, in your desktop machine, you've got two graphics cards. You load an image/frame of video onto GPU 1. That image does not exist in GPU 2, because the two GPUs are physically separate. (Imagine two DVD players, each playing a disc; DVD player #1 can't magically get access to the image being played by DVD player #2, since they are physically separate devices.)
    That's why, in Isadora 1.x, you always had to specify the destination stage at the beginning of the chain when using the Core Image actors -- i.e., the CI Movie Player or CI Picture Player, whose images also live on the GPU. I did this so that Isadora could assign the video stream or picture to a specific physical GPU. When I started implementing the GPU functionality for 2.0, I continued this approach. But, because of this, an image destined for Stage 1 cannot be rendered on Stage 2.
    "But I only have one GPU on my laptop!" you say. Yes, that's true. But every Stage window is a separate OpenGL "context" because it might be on a different GPU. So, even with one GPU, those two Stage windows cannot share an image in the current 2.0 release.
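    The situation can be sketched with a toy model. This is not real OpenGL code -- the class and method names are invented for illustration -- but it captures why a texture resident on one GPU/context can't be drawn by another, and how the "upload to every GPU" workaround described below would behave:

    ```python
    # Toy model (not real OpenGL) of per-GPU texture residency.

    class GPU:
        def __init__(self, name):
            self.name = name
            self.textures = {}  # texture id -> pixel data resident on this GPU

        def upload(self, tex_id, pixels):
            self.textures[tex_id] = pixels

        def render(self, tex_id):
            # A GPU can only draw textures that live in its own memory.
            if tex_id not in self.textures:
                raise LookupError(f"{self.name}: texture {tex_id!r} not resident")
            return f"{self.name} drew {tex_id}"

    gpu1, gpu2 = GPU("GPU1"), GPU("GPU2")
    gpu1.upload("frame42", b"...")

    print(gpu1.render("frame42"))    # works: the frame lives on GPU1
    try:
        gpu2.render("frame42")       # fails: GPU2 never received it
    except LookupError as e:
        print(e)

    # The workaround: upload each frame to *all* GPUs up front, so any
    # stage/context can render it -- at the cost of redundant uploads.
    for gpu in (gpu1, gpu2):
        gpu.upload("frame43", b"...")
    print(gpu2.render("frame43"))    # now works on either GPU
    ```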
    I have a solution to this problem, but I disabled it in the 2.0 release because it has major performance implications that we must test. This feature tells the OpenGL driver to upload every picture/frame of video to all the GPUs on the system. When you have one GPU, this won't have any substantial negative impact on performance. But if you have two or three... well, we really just don't know what this means yet.
    Still, it is 100% the way we have to go because the CPU way -- where you specify the destination stage at the end of the chain – is much more intuitive.
    I'm going to release an update to version 2.0 pretty darn soon... because people are finding subtle problems that need addressing quickly. In that next version, I will include the solution described above as an option you can enable in the Preferences, with lots of warnings that it might lead to performance problems on multi-GPU machines. But with this option, you will be able to specify the destination stage at the end of the chain again, just like you do with CPU-based images. And then you'll easily be able to accomplish the kinds of splits you're talking about in your post.
    Hopefully this detail gives you some insight into what's happening.

  • @CraigAlfredson

    Since this issue can't be instantly solved, here's a tip for a 'hardcore workaround'. I didn't want to edge blend, just to play the same movie on three stages, so I added two more movie players with speed 0 and linked the first player's position output to the position of the 2nd and 3rd. That at least got rid of the sync problem.

  • Thanks @Mark

    I appreciate the gory details and I look forward to the new release.  I guess the only thing I would add as a feature request would be the ability to create a virtual stage that comprises multiple physical projectors edge blended together.  That would speed up my process immensely.
    Thanks for the tip. I know there are many ways to work around the problem, but I was hoping there was a real solution, which Mark has said is coming soon.