@liannemua seconding the previous speaker. Whenever I need an HQ software-based recording I use Syphon Recorder (on Mac). Does a terrific job. Play around with the settings a bit, run a few tests on your hardware, and you should be good to go.
Obviously there’s also the hardware route, using capture cards on other machines and/or, if you’re working with a SMPTE-standard resolution/fps combo, standalone equipment like an Atomos recorder or a Blackmagic Design HyperDeck. This would take a bit of research and - I won’t lie - funds.
Sorry for the late response! Thank you for your advice and recommendations. I'll take a look at these DMX servo controllers. I already had an Arduino board. I ordered a DMX shield last week and plugged it into the board. The wiring is extremely simple and the code works perfectly, so everything is OK for me! Now I'm designing a little 3D-printed case!
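For anyone finding this thread later, a minimal version of this kind of setup looks roughly like the sketch below. It assumes a Conceptinetics-compatible DMX shield (which the common Arduino DMX shields use) driving a hobby servo; the start address and servo pin are placeholders for whatever your rig uses:

```cpp
#include <Conceptinetics.h>
#include <Servo.h>

// Listen to a single DMX channel.
#define DMX_SLAVE_CHANNELS 1

const int kStartAddress = 1;  // placeholder DMX start address
const int kServoPin     = 9;  // placeholder PWM pin for the servo

DMX_Slave dmx_slave(DMX_SLAVE_CHANNELS);
Servo servo;

void setup() {
    dmx_slave.enable();                        // start receiving DMX
    dmx_slave.setStartAddress(kStartAddress);
    servo.attach(kServoPin);
}

void loop() {
    // Map the 0-255 value on our one channel to a 0-180 degree angle.
    uint8_t v = dmx_slave.getChannelValue(1);
    servo.write(map(v, 0, 255, 0, 180));
}
```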
Just closing the loop on my exploration with the Perception Neuron Pro (2020 model) and Isadora 3. After experimentation in Dec '20, I was able to get real-time XYZ coordinates for each tracker on the PN Pro into Isadora as a separate OSC feed via Unity + an OSC Unity patch.
@mark Thank you for your reply. My need is not urgent. It is part of my Isadora toolkit development process to create a collection of user actors and associated control panels that are responsive to ambient sound & movement. Their outputs will be used to generate projections in realtime that I call New Social Landscapes.
My issue is that I have a shader which I sometimes use in Isadora (as GLSL) and sometimes in Millumin (as ISF). There are a couple of minor differences between how the two versions need to be written, so to make it easier to maintain and update, ideally I would add some #ifdefs to separate out the lines for each implementation.
My current attempt can be seen at https://github.com/LiminalET/Z... - if you add a #define GL_ES at the top then it runs fine in Isadora; however, I can't find a macro that will evaluate to true to put there. (GL_core_profile doesn't appear to.)
My main issue is that the ISF shader creates the variables itself from the JSON blob, so I can't re-declare them, whereas Isadora needs them declared globally so it can populate them.
If Isadora could pass a defined macro into the script that I could test for, that would be ideal, but any other thoughts would be appreciated!
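To make the shape of it concrete, here is roughly what I'm after; the input name and ISF JSON are placeholder examples, and for now the define at the top has to be toggled by hand since I can't find a macro either host predefines:

```glsl
/*{
    "INPUTS": [ { "NAME": "speed", "TYPE": "float" } ]
}*/

// Toggled by hand for now: comment this line out before
// loading the file as ISF in Millumin.
#define ISADORA_HOST 1

#ifdef ISADORA_HOST
// Isadora needs the input declared globally so it can populate it.
uniform float speed;
#endif
// Under ISF the same name is auto-declared from the JSON blob above,
// so re-declaring it here is a compile error.

void main() {
    gl_FragColor = vec4(vec3(0.5 + 0.5 * sin(speed)), 1.0);
}
```

If Isadora could define something like ISADORA_HOST itself, the manual toggle would disappear.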
I'll come out of the woodwork and state that in this specific case @peuclid is referring to a new feature that Liminal is adding to ZoomOSC that sends full frames of information about every user in a Zoom call, so that various stats about each user can be leveraged simultaneously to reconstruct user profiles within the integration platform (Isadora in this case). We send these successive OSC packets under the same address because our next update will move the software to a more "pure" implementation of OSC standard practices. @DusX has a great solution for "analog" input, where dropping a few samples here or there would not create a noticeable difference; but because these packets describe user profiles, any loss shows up as a gap in all statistics for a given Zoom participant, which is a problem for online performances. The other software solutions we integrate with do not seem to face these challenges; perhaps they queue incoming OSC packets differently.
I'm going to meet in the middle here by adding a parameter to the new ZoomOSC user interface that sets the output sending rate, so that, when used with Isadora (leveraging the General Service Task feature as needed to find a happy medium), the two programs communicate reliably. We love Isadora and want to make sure it can fully utilize our new features.
I have a lot of respect for what Mark is doing to make OSC accessible to the end user in Isadora, and I think I can make some specific feature requests to TT that would ideally retain the intuitive feel of the Isadora OSC workflow while adding more compliance with the OSC standard (sanitization of arguments and addresses, how packets are stored and constructed, etc.).
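For the curious, the send-rate idea is simple to sketch. This is not ZoomOSC's actual implementation, just a minimal illustration in C++ of pacing a queue of outgoing messages at a fixed interval, so a receiver that polls once per frame never sees a burst it has to drop; `sendOsc` is a hypothetical stand-in for a real OSC library call:

```cpp
#include <chrono>
#include <iostream>
#include <queue>
#include <string>
#include <thread>

// Hypothetical stand-in for a real OSC send (e.g. via oscpack or liblo).
void sendOsc(const std::string& message) {
    std::cout << "sent: " << message << "\n";
}

int main() {
    using namespace std::chrono;

    // Simulate a burst: one profile packet per Zoom participant,
    // all queued in the same instant under the same address.
    std::queue<std::string> outbox;
    for (int user = 0; user < 10; ++user)
        outbox.push("/zoomosc/user/profile " + std::to_string(user));

    // Drain the queue at a fixed rate instead of all at once.
    // 20 ms (50 messages/sec) is a placeholder value; the real
    // parameter would be user-tunable to match the receiver.
    const auto interval = milliseconds(20);
    while (!outbox.empty()) {
        sendOsc(outbox.front());
        outbox.pop();
        std::this_thread::sleep_for(interval);
    }
}
```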
I have had better experiences using the Blackmagic Design codecs (included with their video capture software, which is free).
However, at around 5 minutes you will likely hit the AVI container's maximum file size, which will truncate your video.
I would use OBS to record a Spout feed from your stage. This records in real time, so your patch needs to run smoothly in real time. The nice thing about this approach is that it offloads the recording work from Isadora, so you have more CPU headroom to work with.
I was curious about the idea: if there are Mini DP to DP adapters, shouldn't it work the other way around as well?! So here is just some adapter fun 🤪: if you already own a USB-C to DP adapter, you could chain it with a DP to Mini DP adapter (no guarantee that this really works 😜).
It is a really powerful feature. One thing to be aware of is that turning on the User Actor triggers the INIT function of all contained actors. This is actually very useful, but can be confusing if you are not aware of it. I suggest using the User Actor On/Off within the user actor as a way of seeing the INIT occur (note: you can copy this actor to the main scene patch and force the re-INIT of all actors as well... super helpful).
I've just added a Wave Generator and a Color Maker HSBA, which do the trick! Thank you!
Glad to hear you got what you wanted. You need a program that understands the 3DS model format to open the star. I use Cheetah3D because it is simple and easy for someone who doesn't spend a lot of time working with 3D programs. (Blender, for example, is free but massively complex and I don't really dig the user interface.)