Some questions after testing 2.0
Hello all TroikaTeam,
I was eagerly waiting for 2.0 because I have a project in October where I will need the new mapping capabilities of this great tool.
I played with it today, and it's awesome. But I have some questions:
I think I understand the vid-gpu, vid-cpu, vid-ci thing, but Mark, you said: "Take note that there is a performance hit when you use vid-cpu because the GPU image must be converted to a CPU-based bitmap image. This is costly and should be avoided."
Does that mean that when I play a PhotoJPEG movie with the player output set to vid-gpu and connected to a texture Projector, it will be less efficient than before?
Is it played by AVFoundation (in performance optimization mode), or is it forced to play via QuickTime?
Another thing I don't understand: when I apply a mask and a mapping with a Projector, and then change a parameter of the Projector actor itself (not inside the mapping window), for example 'horz position', the whole image + mask + mapping are all affected together. If I want to move or zoom the masked and mapped image, I can do it from the mapping window and/or publish the parameters I need. But in my logic, the settings (position, zoom...) of the Projector actor should be applied pre-mapping, no? (So the image could be moved or resized while the mask stays put.)
Another thing I don't quite understand:
If (for example) Isadora's processing resolution (in Preferences) is 1024x768, and I play a 1024x768 movie through a Projector whose stage is 1024x768, what is the purpose of the output resolution setting in the mapping window?
For my project I will apply a complex mask and mapping to the whole output. It's something I can do now with the Projector actor, but I have too many of them across several scenes. Each time I have to adapt the show's mapping for a new venue, I'll have to copy, replace, and rewire all my Projector actors. I tried to copy/paste my complex mask from one Projector to another, but it's not (yet?) supported.
I tried to broadcast to a "projector scene," but this doesn't work with crossfading between scenes and doesn't support alpha and layers. I tried to make a user actor containing the Projector, but when I update it, other parameters (like layer number, blend...) are updated too.
Could it be possible to implement the same mapping capabilities as the Projector actor in the Stage Setup window?
I hope my English is clear.
Thanks for all your hard work. I've been working with Isadora since 2007, and 2.0 is a huge step.
All the best
To answer your specific questions:

**_Does that mean that when I play a PhotoJPEG movie with player output vid-gpu connected to a texture Projector, it will be less efficient than before? Is it played by AVFoundation (in performance optimization mode) or is it forced to play via QuickTime?_**

If you choose "performance" in the optimize input, PhotoJPEG movies will be played by AVFoundation. I haven't done elaborate tests, but it seems that AVFoundation isn't much better than QuickTime when playing PhotoJPEG. This is really a "gut instinct" and based on no facts whatsoever! So please take this opinion with a grain of salt. But it does seem clear that the very best performance comes with H.264. If that is possible for you, you may want to take the time to re-encode your movies.

**_With regard to your questions about the Projector actor -- i.e., changing the 'horz position' input, etc..._**

The Projector actor would be considered _post_ mapper. Think of it this way: the mapper produces an image -- just like a Movie Player or Picture Player. Then you can do things to that image using the standard inputs on the Projector. This is in one sense "overkill" -- but when one uses IzzyMap to make interesting shapes and warps (something I personally am more interested in than projection mapping as we generally think of it), having those controls on the Projector becomes very useful. Certainly you could publish parameters from the mapper to do this, but if you simply need to move the whole image, I'd go the easiest route and use the inputs on the Projector actor.

**_What is the purpose of setting the resolution on the output of the mapper window?_**

So that when you use the arrow keys to move a point inside the mapper, it really is moving by one pixel. I need to know the resolution to be able to calculate that.

**_[CONTINUED IN NEXT POST...]_**
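To illustrate the arithmetic behind that last answer: if mapper points are stored in normalized 0.0-1.0 coordinates (an assumption for illustration only, not a statement about Isadora's actual internals), a one-pixel nudge corresponds to a step of 1/resolution, which simply can't be computed without knowing the output resolution. A minimal sketch:

```python
# Hypothetical sketch: why the mapper must know the output resolution
# to make an arrow-key press move a point by exactly one pixel.
# Assumes points are stored as normalized 0.0-1.0 coordinates.

def nudge_right(x_norm: float, output_width: int) -> float:
    """Move a normalized x coordinate right by exactly one pixel."""
    return x_norm + 1.0 / output_width

# At a 1024-pixel-wide output, one key press moves the point by 1/1024;
# at 1920 pixels wide, the same one-pixel step is a smaller fraction.
x = nudge_right(0.5, 1024)   # 0.5 + 0.0009765625
print(x)
```

Without the resolution, the mapper could only guess a fixed normalized step, which would land between pixels at most output sizes.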
**_Could it be possible to implement the same mapping capabilities as the Projector actor in the Stage Setup window?_**

Yes, that's possible -- but really, what we need is a "global output mapper." This was discussed when I brought together Isadora users in Berlin, but more features mean more delays. In the meantime, I would propose you work around this need as follows: place your Projector actor, which does the mapping, inside a User Actor. Then, when you edit the map in one User Actor and choose "Save and Update All," all of the copies of that User Actor will be updated to match. Not quite as elegant as what you suggested, but very workable.

Oops... I just read that you tried a User Actor, but that "other parameters (like layer number, blend...) are updated too." But then, why don't you just add user inputs for the things you need to customize for each version of the Projector? I guess I'm not understanding why this isn't a viable option. Maybe you can explain a bit more to me on this.

**_...to copy/paste my complex mask from one projector to another but it's not (yet?) supported._**

This should be supported. If not, please file a bug report and we'll fix it so that copying and pasting work as you'd expect. Probably a good option would be a command that lets you select a Projector and choose "Copy Mapping," then go to another Projector and choose "Paste Mapping." That wouldn't be too hard to implement, actually. (File that one as a feature request if you think it would help.)

All the best,
Mark

P.S. Your English is excellent!