Hi all
I have just helped install a multi-screen work for an artist at a gallery. Everything seems to be working great; however, a couple of times we have noticed the mouse cursor appearing on the screens. It's like it keeps popping back up!
Here's the setup:
- 3 x Mac minis (2 x 2018, 1 x 2014)
- all running macOS 10.15 Catalina
- Isadora 3.0.7
- 5 videos, full screen, across 5 screens
- wireless mouse and keyboards
My first assumption was that it was the Bluetooth mouse and keyboard connecting/disconnecting. The gallery tech has told me that they have since "turned off Bluetooth on all the Macs and the cursors still show up".
I'm going to log in remotely tonight and double-check settings, but my feeling is that something is bringing the Finder or another app to the front. If it's not Bluetooth, what is it? Is it the remote login software?
I just had a play with the Cursorcerer app/pref pane (suggested in the post "...then again, there is the thing with the cursor"), and that looks like it could be a really great band-aid solution.
But I'm keen to find the actual cause. Any ideas?
I'll update as I find out more...
I suspect the issue is deeper than just unsupported codecs. Isadora does scan media to check file type and codec support, but unusual variations can still cause issues. I have seen this mostly with media downloaded from archives, etc.
My guess is that H.264 and/or MP4 files are at the root. These can have a nearly limitless mix of metadata, audio codecs, and compression variations.
Playback will be more reliable and predictable if you ensure your media is consistent. I recommend processing your files in batches with Shutter Encoder before import to guarantee that consistency. https://www.shutterencoder.com...
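If you'd rather script the batch normalization than use Shutter Encoder's GUI, the same idea can be sketched with ffmpeg. This is only a sketch: it assumes ffmpeg is on your PATH, and the codec settings (libx264 + AAC, yuv420p) are common safe defaults, not official Isadora requirements.

```python
# Sketch: batch-normalize a folder of clips to one consistent codec/pixel
# format before importing into Isadora. Assumes ffmpeg is installed and on
# PATH; the chosen codecs are common defaults, not Isadora requirements.
import subprocess
from pathlib import Path

def normalize_cmd(src: Path, dst_dir: Path) -> list:
    """Build an ffmpeg command that re-encodes src to a uniform format."""
    dst = dst_dir / (src.stem + "_normalized.mp4")
    return [
        "ffmpeg", "-y", "-i", str(src),
        "-c:v", "libx264", "-pix_fmt", "yuv420p",  # uniform video codec / pixel format
        "-c:a", "aac", "-ar", "48000",             # uniform audio codec / sample rate
        str(dst),
    ]

def normalize_folder(src_dir: Path, dst_dir: Path) -> None:
    """Re-encode every .mp4 in src_dir into dst_dir."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    for clip in sorted(src_dir.glob("*.mp4")):
        subprocess.run(normalize_cmd(clip, dst_dir), check=True)
```

The point is less the specific codec choices than that every file ends up identical in codec, pixel format, and audio setup.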
Hi, I would like to use the audio level of a QuickTime video that I am playing in the scene as a parameter to manipulate the video. I understood that the freq bands property of the Movie Player actor was previously used for this purpose. I do not see an option to display the freq bands as an output property on the Movie Player actor, and I read in an earlier forum thread that it has become obsolete. Could someone please tell me what I could do? I have read https://community.troikatronix...
but I do not know what exactly it means I should do. If anyone could explain it to me, I would be so grateful!
I am using Isadora 4.0.1 on macOS 14.5. Thank you very much!
@reload2024 There are different ways of unwrapping the equirectangular video used for 360. I think the way you are working now uses a spherical projection. For your use case, a cube map sounds more appropriate.
I think you can use ffmpeg to process the video into a cube map: https://jiras.se/ffmpeg/mono.html
This should fix the distortion you see when you project into the square room.
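For reference, here is a sketch of the kind of ffmpeg invocation that does the remap, using ffmpeg's v360 filter (available since ffmpeg 4.1). The file names are placeholders; run the printed command from a shell.

```python
# Sketch: build an ffmpeg command that remaps an equirectangular 360 video
# to a 3x2 cube map layout via the v360 filter. File names are placeholders.
import shlex

def cubemap_cmd(src: str, dst: str) -> list:
    """Return the ffmpeg argument list for an equirect -> cube-map remap."""
    return [
        "ffmpeg", "-i", src,
        # input=e : source is equirectangular; output=c3x2 : 3x2 cube map
        "-vf", "v360=input=e:output=c3x2",
        dst,
    ]

print(shlex.join(cubemap_cmd("360_equirect.mp4", "360_cubemap.mp4")))
```

Other output layouts (e.g. `c6x1`) exist too; which one is most convenient depends on how you plan to crop the faces for each wall.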
I have a 3D video that I'm trying to map onto 3 walls of a large square room (three 40 ft x 9 ft walls). When I do this, the front wall looks good, but the side walls are horizontally stretched. Adjusting the FOV of the player to make it a lot wider helps, but then it no longer feels like you're in the video; you're too far back. So my thought is that maybe I have to unwrap this video using ffmpeg, which will probably work, but maybe it won't feel like there's the same depth?
So I'm wondering what the best way to approach this is in Isadora. For the record, my stage spans 7 projectors (3 on the front wall and 2 on each side wall).
Usually I unwrap 3D videos, but I only recently discovered that Isadora could play 3D movies directly and was pretty excited by that, particularly because now I can play regular 3D movies without modification. Except, of course, for this stretching on the side walls.
Isadora frequently crashes with a grey screen and a request to send an error report to Microsoft, or it hangs indefinitely and locks up. The cause is always the same: I gather a lot of different types of video, and if I try to import a file with an unknown codec, this happens and I can lose my work. Isn't it possible to check whether a file is importable before going ahead and allowing itself to crash?
Hi all
I just thought I'd share something I have been working on while doing tech support these last couple of weeks.
French artist Bertille Bak has a show on at Vox Gallery, Montreal, featuring many video works, some with multiple screens. In fact, there is also another multi-screen show next door by the artist Alexandre Larose.
Anyway, I helped set up a sync system for Bertille's 5-screen piece, Mineur mineur (2022), using Isadora and 3 Mac minis. I think the gallery had either used all their BrightSign players, or simply didn't have the time to wrestle with firmware etc. (I still haven't got to grips with BrightSign myself). So it was a good chance for me to learn and apply some sync techniques using OSC signals over the gallery network!
It seems to be running OK. The setup uses 3 Mac minis connected to the network via Ethernet, 5 screens in portrait, and 5 HD videos (H.264). Mac minis A & B play two videos each, and Mac mini C plays just one. There is a 'SETUP' scene with a 'Preload Scene' actor in it, followed by a 'MOVIE' scene. All Isadora patches end-jump to the preload scene once the movie has finished (the films have titles/black at the end, so a seamless loop is not required). Mac A then waits 10 seconds, sends an OSC signal to trigger Macs B & C to jump back to the MOVIE scene, and jumps itself.
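For anyone curious what's on the wire, the jump trigger Mac A sends can be sketched with nothing but the Python standard library; an OSC message is just a null-padded address, a type-tag string, and big-endian arguments. The address "/isadora-multi/1" and port 1234 below are illustrative placeholders; use whatever address and input port your own Isadora patches listen on.

```python
# Sketch: hand-encode and send a minimal OSC message over UDP (stdlib only).
# Address and port are placeholders; match them to your Isadora OSC setup.
import socket
import struct

def osc_pad(s: bytes) -> bytes:
    """Null-terminate and pad an OSC string to a 4-byte boundary (OSC 1.0)."""
    return s + b"\x00" * (4 - len(s) % 4)

def osc_message(address: str, value: int) -> bytes:
    """Encode an OSC message carrying a single int32 argument."""
    return (osc_pad(address.encode("ascii"))   # padded address pattern
            + osc_pad(b",i")                   # type tags: one int32
            + struct.pack(">i", value))        # big-endian argument

def send_jump(host: str, port: int = 1234) -> None:
    """Fire a scene-jump trigger at one playback machine."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(osc_message("/isadora-multi/1", 1), (host, port))
```

In practice Isadora's own OSC Transmit actor does this for you; the sketch is just to show how little is actually being sent once a second.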
One small issue I had was that one Mac was occasionally not receiving the OSC signal to start. It turned out that I'd copied the Wi-Fi IP address for that one, rather than the Ethernet IP address, so I guess it had been occasionally falling off the Wi-Fi network. Once I updated the address list with the Ethernet address, it was fine.
I've also incorporated a few Data Array actors so that IP addresses can be read/edited from text files, and the delay/restart time is stored there too. I'm also using a Data Array to write a log that records the movie position, scene, and time to a text file every minute. I can then access these via remote desktop software (Splashtop) and make a quick comparison. Considering these are different machines (Mac C is lower spec), they are holding up pretty well (see below).
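The eyeball comparison over Splashtop could also be scripted. Here is a hypothetical helper (not part of Isadora) that parses log lines in the format of the excerpt at the end of this post and reports the position spread across machines at each timestamp:

```python
# Sketch: parse per-machine log lines such as
#   "MAC B movies 90.218697 5-9-2024-11-2-0"
# and compute, per timestamp, the max spread in movie position (seconds)
# across machines, as a quick sync-drift check. Hypothetical helper.
from collections import defaultdict

def parse_line(line: str):
    """Split a log line into (machine, scene, position, timestamp)."""
    brand, unit, scene, position, stamp = line.split()
    return f"{brand} {unit}", scene, float(position), stamp

def drift_by_timestamp(lines):
    """Map each timestamp to the max position spread across machines."""
    positions = defaultdict(list)
    for line in lines:
        _, _, pos, stamp = parse_line(line)
        positions[stamp].append(pos)
    return {stamp: max(p) - min(p) for stamp, p in positions.items()}
```

On the excerpt below, the spread stays around a hundredth of a second, which is plenty tight for independent screens in a gallery.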
Anyway, if you get a chance to see the show, I highly recommend it. The work is darkly humorous, a playful approach to serious subjects (labour and working conditions), and extremely well presented, almost theatrically. The other work by Alexandre Larose is super too, though not if you have vertigo!
MACHINE  SCENE   MOVIE POSITION  DATE(D-M-YYYY)-TIME(H-M-S)
MAC B movies 90.218697 5-9-2024-11-2-0
MAC C movies 90.2062 5-9-2024-11-2-0
MAC A movies 90.215096 5-9-2024-11-2-0
MAC B movies 96.709099 5-9-2024-11-3-0
MAC C movies 96.706001 5-9-2024-11-3-0
MAC A movies 96.7089 5-9-2024-11-3-0
MAC B movies 2.1086 5-9-2024-11-4-0
MAC C movies 2.1053 5-9-2024-11-4-0
MAC A movies 2.1092 5-9-2024-11-4-0
MAC B movies 8.6026 5-9-2024-11-5-0
MAC C movies 8.598701 5-9-2024-11-5-0
MAC A movies 8.6047 5-9-2024-11-5-0
MAC B movies 15.1008 5-9-2024-11-6-0
MAC C movies 15.0922 5-9-2024-11-6-0
MAC A movies 15.0968 5-9-2024-11-6-0
MAC B movies 21.592499 5-9-2024-11-7-0
MAC C movies 21.5868 5-9-2024-11-7-0
MAC A movies 21.593601 5-9-2024-11-7-0
MAC B movies 28.0889 5-9-2024-11-8-0
MAC C movies 28.077099 5-9-2024-11-8-0
MAC A movies 28.089401 5-9-2024-11-8-0
Mineur mineur (2022) deals with child labour. The video was made entirely at a distance during lockdown: in Madagascar, India, Bolivia, Indonesia and Thailand, each family received a kit from the artist for recording images. Children toiling in tin, gold, silver, sapphire and coal mines choreograph a scene resembling the school fair they are denied. Their silhouettes were then composited into narrow mine tunnels seen in cross-section, like anthills observed by an entomologist. Bertille Bak constructs collective actions with her subjects to express their reality in another way. She is among those for whom making things together belongs to a vision of happiness. The result is spontaneous images, edited with the speed of a gesture.
@fred said:
@pingdesigns If the sound file was created by the composer, maybe you can get a bounce of the clashes only as a separate file (or better yet, a multi-channel WAV file that has the full mix and the clashes separately). QLab can play multi-channel WAV files and route them to different outputs on a sound card. You then only need to take a line out from the clashes audio track on the QLab machine and plug it into an input on the Isadora machine. From there you can use the capture in Isadora to get the audio of the clashes and analyse that track alone. This will give you clean, isolated audio that you can use as triggers to drive your video effects.
Brilliant suggestion as always!
@fred said:
I know people do amazing things all manually cued, but for some reason I always prefer the precision and reliability of generated cues. I guess I am always scared of missing a cue or not being exactly on time, so I put in the extra work for automated workflows.
I also prefer airtight, automated solutions whenever possible, so your preference for them isn't unusual, but sometimes using humans is the only feasible solution available. I have, though, myself been the human who pressed the button at the wrong time.