@woland Thanks - I didn't know Text/ure was outdated, good to know. The manual I am looking at is from 2019 so perhaps that's why. I'll try Text Draw instead.
@jfg that is clever, it does work too. Though I don't love using little micro delays; it feels a bit like a hack, and like something could easily get out of sync if the computer stutters... But it does work and is simple. And that counts for a lot.
It's encouraging to me as a beginner that the community has so far found three different ways of solving this. I appreciate the help!
Thanks @dusx. Your suggested change means the trigger doesn't happen until after the delay, whereas I want the trigger to pass immediately and then wait until after the delay before the next trigger passes through.
Currently OSC can't be assigned in the Edit Go Triggers dialogue. However, the way I have worked around this in the past is to set up a MIDI trigger in this dialogue and create a User Actor that receives OSC and sends MIDI (to a MIDI loopback).
Of course, you need to add the User Actor to every scene (or a constant background scene).
I'll make sure adding OSC to the Go Triggers dialogue is in the feature requests!
I don't really know what delay you are seeing, but it sounds like it's probably due to the Film being preloaded for playback. This time will vary due to system resource changes.
You could trigger Isadora to Start the scene and pause Ableton (if possible), and then again trigger Ableton to continue from Isadora (once playback has begun). That might eliminate the delay, as Ableton shouldn't require any additional load time.
Yes, if I remember correctly the decision was made to save the Stage Setup with the Isadora file. If you have a setup you need to use in many files, you may want to create a 'Template' file with your stage setup, and then cut and paste your v2 scenes into these new v3 files. Allowing the Stage Setup to be saved as an external config file and loaded into other files is something we hope to implement moving forward.
Actually... this converter works so well the trick might be just to work in ISF land and convert, worry about the details later. It'd be nice to have some more documentation so if it exists I'll take it, but I can make progress now. Thanks again.
You can also change the default font size of Actors and Controls to make all that text bigger if you go to Isadora > Preferences > General tab (the first one) > User Interface > Actor/Control Font Size
The easiest way to do this right now would be Python I believe, since there are definitely libraries out there for recognizing QR codes, and you can feed video into the Pythoner actor. Feel free to send in a support ticket using the link in my signature if you'd like to try out the beta version of the Pythoner actor.
I successfully tested BirdDog cameras with the VISCA alpha actor. And tomorrow morning I'll test this amazing guy!!
The Datavideo PRT10 Mark II robotic pan-tilt head has motors that can control zoom and focus. The amazing thing is that the movement is very smooth, much better than the BirdDog (on the BirdDog the movements seem to have fewer steps). Plus, BirdDog cameras (which I like a lot, by the way, for certain uses) will never be as good in theatrical conditions with low light. The Datavideo heads allow me to use our Blackmagic Pocket 6K Pro with Canon L-series f/2.8 lenses, which have a high dynamic range and are superb in low light. I'll report my findings here. A prior quick test didn't allow me to control it; I hope it will go better tomorrow.
@woland Thanks for the help and for sharing this solution!
It's really close to the solution I'm looking for, except that I will need the Data Array actor or the Read Text from File actor because I have a considerable amount of text to put on screen. Thank you so much for all the directions!
@jfg thanks for this solution too; it is perfect for a more minimal work with words & sound. I'll have to make a scene with your example too.
@ril Thanks for the enthusiasm! I'm from Portugal, so I can speak that "portuñol" mix... ;-)
Sure, but you're going to need some hardware if you want to generate it live. A depth camera such as a Kinect, Orbbec Astra, or RealSense will give you a human shape that can then be processed with the QC ASCII Art actor (if you use a Mac) or a GLSL shader like this: https://www.shadertoy.com/view... to produce something similar to that effect. The term "projection mapping", however, tends to refer to something else, unless you are aiming to map the image onto a moving body, which presents an additional set of challenges.
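The core of that ASCII-art effect is simple enough to show in a few lines: map each pixel's brightness to a character from a dark-to-bright ramp. This is a pure-Python sketch on a tiny hand-made "frame"; in practice the input would be a grayscale frame (e.g. a silhouette from the depth camera), and the character ramp is an arbitrary choice.

```python
# Minimal luminance-to-ASCII sketch. The ramp and the 5x2 test frame
# are made up for illustration; a real frame would come from a camera.
RAMP = " .:-=+*#%@"  # dark -> bright

def ascii_art(gray, ramp=RAMP):
    """Map each 0-255 pixel value in a 2D list to a character from the ramp."""
    n = len(ramp)
    lines = []
    for row in gray:
        lines.append("".join(ramp[min(p * n // 256, n - 1)] for p in row))
    return "\n".join(lines)

frame = [
    [0, 64, 128, 192, 255],
    [255, 192, 128, 64, 0],
]
print(ascii_art(frame))
```

A GLSL shader does the same mapping per cell on the GPU, which is what makes it fast enough for live video.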
The simple version would just trigger once (maybe every scene).
Advanced would be a near frame sync. @Michel once offered a syncing user actor. It was made with version 2, but you can use it as a how-to. Instead of connecting the players directly, you would connect them over OSC.
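For anyone curious what "connecting them over OSC" looks like on the wire, here is a small sketch of encoding a single-float OSC message by hand (the address `/movie/position` is just an example, not an Isadora-defined address). Each part of an OSC message is padded to a multiple of 4 bytes, followed by a type tag string and big-endian arguments.

```python
# Hedged sketch: hand-rolled OSC message with one float argument.
# In practice you'd likely use an OSC library; this just shows the wire format.
import struct

def osc_message(address, value):
    """Build a binary OSC message carrying a single 32-bit float."""
    def pad(b):
        # OSC strings are null-terminated and padded to 4-byte boundaries.
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

msg = osc_message("/movie/position", 0.5)  # example address, assumed
# This byte string can be sent over UDP to the remote machine's OSC port.
```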
If it is meant for a hot backup running in parallel, I usually build a 'deadman switch': a pulse is sent every other second, resetting a counter on the remote computer. If the connection is lost, the counter won't be reset, and that triggers something to take over the session automatically.
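The deadman-switch idea can be sketched outside Isadora in a few lines of Python: the main machine sends a UDP pulse on a fixed interval, and the backup resets a timeout on every pulse, taking over only when no pulse arrives in time. The port and timing values here are arbitrary assumptions.

```python
# Hedged sketch of a deadman switch over UDP. Port, pulse rate, and
# timeout are made-up values; tune them to your network and show.
import socket
import threading
import time

PULSE_ADDR = ("127.0.0.1", 9000)   # assumed free local port
TIMEOUT = 2.0                      # seconds without a pulse before takeover

def send_pulses(stop):
    """Main machine: send a pulse twice a second until asked to stop."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while not stop.is_set():
        sock.sendto(b"pulse", PULSE_ADDR)
        time.sleep(0.5)

def watch(on_takeover, stop):
    """Backup machine: every received pulse resets the countdown."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(PULSE_ADDR)
    sock.settimeout(TIMEOUT)
    while not stop.is_set():
        try:
            sock.recv(16)          # pulse arrived: countdown resets
        except socket.timeout:
            on_takeover()          # no pulse for TIMEOUT seconds
            break
```

In Isadora terms, the "pulse" would be the OSC message reset­ting your counter actor, and `on_takeover()` is whatever starts the backup session.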