Thanks, Skulpture! I've managed to write my first AppleScript: it takes the clipboard text (from an app that automatically copies the artist and track title to the clipboard) and gets it into Isadora as text, and hopefully I can trigger the script with OSCulator.
I'll continue looking for a more efficient way. It would be nice if the clipboard could be monitored so the script would run on every change.
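For the monitoring idea, here is a minimal polling sketch in Python. It assumes macOS's `pbpaste` command for reading the clipboard; the `on_change` handler is where you would trigger your AppleScript (e.g. via `osascript`) or send the text on to Isadora. The polling loop takes any reader callable, so the change-detection logic itself can be tested without a real clipboard.

```python
import subprocess
import time

def read_clipboard():
    """Read the macOS clipboard via pbpaste (macOS only)."""
    return subprocess.run(["pbpaste"], capture_output=True, text=True).stdout

def poll_for_changes(read, on_change, interval=0.5, max_polls=None):
    """Call on_change(text) whenever the value returned by read() changes.

    `read` is any zero-argument callable, so the loop can be exercised
    with a fake clipboard in tests. max_polls=None polls forever.
    """
    last = None
    polls = 0
    while max_polls is None or polls < max_polls:
        current = read()
        if current != last:
            if last is not None:  # don't fire on the very first read
                on_change(current)
            last = current
        polls += 1
        time.sleep(interval)

# On a real machine you would run something like:
# poll_for_changes(read_clipboard, lambda t: print("clipboard now:", t))
```

Half a second of polling latency is usually fine for track-title displays; a fancier approach would watch the pasteboard change count via PyObjC, but the plain loop is far simpler.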
Thanks for your thoughts as well! Here's the article from AnandTech that explains how scaling works on the Retina MacBook Pro: http://www.anandtech.com/show/6023/the-nextgen-macbook-pro-with-retina-display-review/6
"If you select the 1680 x 1050 or 1920 x 1200 scaling modes, Apple actually renders the desktop at 2x the selected resolution (3360 x 2100 or 3840 x 2400, respectively), scales up the text and UI elements accordingly so they aren’t super tiny (backing scale factor = 2.0), and downscales the final image to fit on the 2880 x 1800 panel. The end result is you get a 3360 x 2100 desktop, with text and UI elements the size they would be on a 1680 x 1050 desktop, all without sacrificing much sharpness/crispness thanks to the massive supersampling. The resulting image isn’t as perfect as it would be at the default setting because you have to perform a floating point filter down to 2880 x 1800, but it’s still incredibly good."
When mapping the cube on stage, I had a projector with 1024x768 resolution, and the movies and stills were no larger than that. In the first scene there is a picture of a bird mapped onto one side of the cube; the next scene has a video in it which is attached to three corner-pin projectors, as it is mapped onto two sides of the cube plus the back wall of the stage. So there is one movie and one still photo playing when transitioning between the two scenes. I'm sure the resolution of that picture plus the movies could even be reduced, given a total output resolution of 1024x768, but I don't think this is the problem.
I haven't worked on this show for a couple of weeks now and will experiment a little with it as soon as I can.
Thanks for your insight, I'll try different resolutions and target fps!
Rather than using /isadora-multi/1, use the standard TouchOSC layouts with their own IDs (not /isadora or /isadora-multi) and then set them up in the Stream Setup window.
For example, using one of the standard TouchOSC setups:
1) Open Stream Setup window from Communications menu
2) Click "Auto-Detect Input"
3) Move the TouchOSC controls you wish to "listen" to -- these will appear in the Stream Setup window
4) Assign port numbers to the inputs
5) Close the window
6) Add OSC Listeners to listen to the data.
When I do as described above, and move the X/Y controllers around, I see this in the Stream Setup:
You can then click "Renumber Ports" and it will assign the ports 1, 3, 5, and 7. Why is it skipping numbers? Because these OSC inputs provide two numbers -- X and Y. So, for the first address (/3/xy2) you will listen to X on OSC port 1, and Y on OSC port 2.
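The numbering scheme above is just consecutive ports, one per channel of each address. A tiny sketch (the helper name is mine, purely to illustrate why the X ports land on 1, 3, 5, 7):

```python
def assign_ports(addresses, channels_per_address=2):
    """Assign one port per channel of each OSC address, numbered
    consecutively from 1 -- the pattern "Renumber Ports" produces
    for two-value X/Y controls."""
    ports = {}
    next_port = 1
    for addr in addresses:
        for ch in range(channels_per_address):
            label = ("x", "y")[ch] if channels_per_address == 2 else str(ch + 1)
            ports[(addr, label)] = next_port
            next_port += 1
    return ports

# Four X/Y controls -> X channels on ports 1, 3, 5, 7; Y channels on 2, 4, 6, 8.
```

Single-value controls (faders, buttons) would consume one port each, so mixing them in shifts the numbering accordingly.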
Hopefully that's enough to get you going.
DusX speaks the truth: the unconnected inputs of a MultiMix actor do not consume any resources at all. Why can you adjust the inputs? Because there are basically eight different functions inside, each one optimized for the number of inputs you provide (i.e., the first one processes one image, the second one processes two, and so on.)
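The dispatch pattern described there can be sketched like this. The mix functions below are illustrative stand-ins, not Isadora's actual code; the point is only that the actor picks a routine specialized for the number of connected inputs, so unconnected inputs never enter any code path:

```python
# Hypothetical per-arity mixers: each averages exactly the inputs it receives.
def mix1(a):        return list(a)
def mix2(a, b):     return [(x + y) / 2 for x, y in zip(a, b)]
def mix3(a, b, c):  return [(x + y + z) / 3 for x, y, z in zip(a, b, c)]

MIXERS = {1: mix1, 2: mix2, 3: mix3}  # ...a real MultiMix would go up to 8

def multimix(*connected_inputs):
    """Dispatch to the mixer optimized for the connected-input count.
    Unconnected inputs simply aren't passed in, so they cost nothing."""
    return MIXERS[len(connected_inputs)](*connected_inputs)
```

This is why raising the input count on the actor is free until you actually plug something in.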
Unfortunately I don't know this system. Does the audio come in via Soundflower, or something else? If you see your device/system as an option in Audio MIDI Setup, it should also appear as an option in the Audio section of the Live Capture Settings window.
Maybe you can tell us a bit more about how this system works so we can help you better.
Hey Jamie! I really like using Mapio. If you look in my other post about 3D quad distort/edge blending, I give an outline guide on how to get it to work. It might be overkill and limit you a bit, since you have to use a Syphon patch in an always-active send scene. I'll PM you my phone number -- don't hesitate to call! I'd like to talk to you about trying to get Mapio working as a QC actor in Izzy.
I know that I sometimes have trouble with FF actors not outputting correctly formatted video, which must then be run through a Fader actor (at the lowest possible setting) to be rewritten as a properly formatted Isadora video feed.
Maybe you are having a similar problem? I use the Fader for this now because it requires no other inputs, but really any of the mix/blend video actors should correct such a problem.
This multiple projectors solution is in fact something that I do a lot to get that effect.
I recently found, however, that I wanted to recreate not just a luminance adjustment, but saturation and hue shift as well, in pre-rendered video. Levels adjustments seem to get close to the behavior of Izzy HSL luminance adjust, but I'm wondering what the actual operation is to get it exactly :-)
This is also partly a question of just curiosity about how the luminance adjust is defined in HSL adjust.
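I don't know Isadora's exact operation either, but one plausible model (an assumption, not the documented behavior of the HSL Adjust actor) is converting to HLS, scaling the lightness channel, and converting back. Python's standard `colorsys` module makes this a three-liner:

```python
import colorsys

def adjust_luminance(rgb, scale):
    """A guess at an HSL-style luminance adjust: convert RGB (0..1 floats)
    to HLS, scale L with clamping, and convert back. Hue and saturation
    are preserved; only lightness changes."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    l = max(0.0, min(1.0, l * scale))
    return colorsys.hls_to_rgb(h, l, s)

# Mid-gray at half luminance -> darker gray, same (zero) saturation.
```

If the actor instead works in YUV/luma space, the results will differ on saturated colors, so comparing both models against the actor's output on a test ramp would settle which one it uses.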
Try disabling all QC modules by going to Isadora Preferences > Video > Quartz Composer Plugin Load Options and disabling all of them. Then see if the FaceTime camera comes back. If so, then one of your QC plugins is "stealing" the camera before Isadora can acquire it.
For what it's worth, in the next release I've changed the code to grab the Live Video Input **before** the QC plugins are loaded, which should avoid this problem in the future.