I have done that in a past life (4 years ago) when I was using TCPSyphon to send video signals to RPI 1B+ units. Unfortunately, my current receiver setup (https://dicaffeine.com/ running on a mix of RPI 3B and 3B+ units) doesn't have that capability. The units are already at 75-80% CPU just decoding the SD NDI streams, and the software doesn't support cutting things up.
Another option I am considering is forking my network and playback system into two layers. A master Izzy playback computer would send 2-4 NDI streams, each containing half or a quarter of the wall's imagery, to 2-4 matching second-tier Izzy computers over network A. The second-tier computers would then cut those images up and pop them out to all their respective RPI receiving nodes over network B. Maybe a little crazy? Maybe could work?
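The cutting-up itself is simple arithmetic; a minimal sketch (a hypothetical planning helper, not Isadora or NDI code) of the crop rectangles each second-tier machine would cover:

```python
# Compute crop rectangles for splitting a video wall into an
# r x c grid of sub-streams (1x2 for halves, 2x2 for quarters).
# Hypothetical helper for planning only.

def tile_rects(width, height, rows, cols):
    """Return (x, y, w, h) crop rectangles in row-major order."""
    w, h = width // cols, height // rows
    return [(c * w, r * h, w, h)
            for r in range(rows) for c in range(cols)]

# A 1920x1080 wall cut into quarters for four second-tier machines:
for rect in tile_rects(1920, 1080, 2, 2):
    print(rect)
```

Each second-tier computer would take one of these rectangles from its incoming NDI stream and subdivide it again for its own RPI nodes.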
Thanks everyone for your replies. NDI and yams do indeed work; much appreciated. However, I am still kinda intrigued by the Sidecar option because it actually semi-works.
Here are two screenshots of the same file, with the MacBook and iPad connected via Sidecar but different screens assigned as "secondary". When the iPad is assigned as the primary screen, or "Display 1", everything works smoothly: I can use my iPad as my main display and control the scene while the MacBook screen is used as the second display.
Case 1 (works): iPad is display 1 and MacBook is display 2
However, when the iPad is assigned as the second display, the iPad screen turns white instead of showing the stage it's assigned to.
Case 2 (not working): MacBook is display 1 and iPad is display 2
Sorry if this was already clear, but I wanted to make sure I explained the problem properly. I'm risking repeating myself because it seems like a minor issue that might actually be a bug, and one that could perhaps be solved with a tiny little patch?
Just want to thank you all again for all your help and insight. We're opening our show tonight; if anyone is interested in viewing the piece, you can stream it directly on our Twitch! We have two showings: tonight at 20:00 PST (Vancouver/LA time), and tomorrow (Sunday) at the same time.
Fingers crossed that Skype continues to behave for us! All of your tips and suggestions have been immensely helpful.
@dusxI I have actually done just that now. I did test sending directly from my system over a hardline into a capture card for the streaming server. I then tested setting my stage output to spout and using Spout to NDI to send the feed via network to the same streaming server. The Spout to NDI handled things the best just slightly. Which I found surprising.
Explored some more and got some feedback for everyone (still working out the kinks, but I figured I should report back). I am currently using VB-Audio Virtual Cable, which lets you connect virtual audio sources. I selected the virtual cable in the audio-out field of the NDI Watcher beta and then selected the same virtual cable in the live inputs. This allowed me to bring the audio from the individual NDI sources into the live inputs in Isadora. There are still a few kinks (the separation between the various sources is tenuous because of audio feedback, and the connection sometimes needs a couple of restarts to become responsive), but it's definitely a workaround!
Recovering Isadora 3.07 has been a big help in troubleshooting the stream setup to Blackboard Collaborate. I am still getting intermittent black video, where the stream appears to be previewing in the Live Capture Settings but then does not display through the Video In Watcher. After trying to pin this down, I think it requires a specific sequence of activation steps to maintain the throughput of the video.

I tried a bit harder with direct audio through the camera and can only conclude that Blackboard Collaborate is noisy, with no convenient way to monitor the audio stream before going live. I did manage some improvement by recording the stream and playing it back; that appeared to be the only way to know what the stream sounds like. After setting the sensitivity of the Sennheiser radio transmitter to -24 dB, I was able to reduce some of the noise. Monitoring the audio through the camera sounds fine, and through Isadora it sounds fine, so my conclusion is that Blackboard Collaborate does not handle audio that well. But I will keep trying to get clean audio.
I am hoping to get the video and audio stream to run through Isadora, and I will keep trying to get a stable patch configuration.
@vixmedia Not at all! ZoomOSC extends the vanilla zoom client, so you just log in using your official Zoom account and the software will inherit whatever privileges you have in your existing Zoom account. Now, depending on the type of work you are doing, having a paid account might give you helpful features like webinars or multi-person meetings over 40 minutes in length, but we don't impose any requirements on account level ourselves.
PTZ cameras are one device option to look at. I am thinking the proliferation of gimbals is another, somewhat related, device category that could be considered for Izzy plugin development. I have a DJI Ronin M and a Ronin S gimbal, and a quick cost comparison suggests these could offer good value and flexibility compared to the PTZ camera lineup; admittedly, they are a BYO-camera option. The PTZ cameras appear to share a common protocol, but the diverse gimbal makes and models are likely all proprietary. Still, I would definitely use an Isadora plugin that implemented gimbal control as an alternative to acquiring a PTZ camera, specifically because a gimbal allows any number of camera types to be mounted while offering functionality comparable to PTZ.
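For reference, the shared PTZ protocol is usually VISCA, and many IP cameras accept it over UDP. A minimal sketch of building a VISCA Pan-tiltDrive packet (byte layout from the published VISCA command set; the camera IP, port, and speed values below are placeholder assumptions, and some cameras wrap this payload in a vendor header):

```python
# Sketch of a raw VISCA "Pan-tiltDrive" command, the protocol most
# PTZ cameras share. A gimbal plugin would instead need per-vendor SDKs.

def visca_pan_tilt(pan_speed, tilt_speed, pan_dir, tilt_dir):
    """pan_dir: 0x01 left, 0x02 right, 0x03 stop;
    tilt_dir: 0x01 up, 0x02 down, 0x03 stop."""
    return bytes([0x81, 0x01, 0x06, 0x01,      # address + command header
                  pan_speed, tilt_speed,        # speeds (vendor-defined ranges)
                  pan_dir, tilt_dir,
                  0xFF])                        # VISCA terminator

cmd = visca_pan_tilt(0x08, 0x08, 0x02, 0x03)   # pan right, tilt stopped

# To actually send it you would open a UDP socket to the camera, e.g.:
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(cmd, ("192.168.0.100", 52381))  # placeholder IP/port
```

This is one reason a PTZ plugin is the easier target: one packet format covers many vendors, where each gimbal would need its own integration.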
This is where I started having questions about the alpha channel in the NDI output. It seems like having an alpha channel in an NDI feed takes more system resources/bandwidth, and with so many NDI sources going, it seemed like it would be good to have the option of not having the alpha channel in the NDI output if it's not necessary.
Well, it's just the one feed from Isadora in the end -- obviously it's not the eight feeds from vMix. But I will work on adding an option to the beta that allows you to disable the alpha channel on the output.
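For anyone curious about the scale of the savings: the exact numbers depend on NDI's internal compression, but for the raw frame data an alpha format like BGRA carries 4 bytes per pixel versus 2 for a no-alpha UYVY feed, so dropping alpha roughly halves the uncompressed payload. A quick back-of-envelope sketch:

```python
# Rough uncompressed-payload comparison for an NDI feed with and
# without alpha. NDI compresses on the wire, so treat these as an
# upper-bound illustration, not actual network usage.

def raw_mbps(width, height, fps, bytes_per_pixel):
    """Uncompressed video payload in megabits per second."""
    return width * height * fps * bytes_per_pixel * 8 / 1_000_000

with_alpha = raw_mbps(1920, 1080, 30, 4)  # BGRA: 4 bytes/pixel
no_alpha = raw_mbps(1920, 1080, 30, 2)    # UYVY: 2 bytes/pixel
print(with_alpha, no_alpha)               # the alpha feed is twice the size
```

With several SD or HD sources running at once, halving each feed adds up quickly.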
But if you need the frame rate fix immediately, please open a ticket (which will get to @Woland) so that you can become a beta tester. The team does not yet have the fix you need, but I can get it done this week for you.
Because we've primarily been working on enhanced audio features, I feel the beta will be quite stable if you're not digging into those particular features.