You are right, it depends on the footage: sometimes it can be more or less acceptable, but most of the time it is not, unfortunately.
I actually do a sort of dithering using various types of noise (sometimes with the Shimmer actor), but that does not work every time or on everything; one cannot achieve the same kind of results as motion blur in After Effects or even in Jitter, or what one could do in the past with Trails on the GPU in Isadora 2.6, or TimeBlur on the CPU, etc. Not to mention the good old Wave from Peter Warden.
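For anyone curious, the classic Trails/TimeBlur look boils down to an exponential frame accumulator: each output frame is a weighted blend of the new frame and the previous accumulation. A minimal NumPy sketch of that idea (not Isadora's actual implementation; the function name and decay value are mine):

```python
import numpy as np

def trails(frames, decay=0.85):
    """Exponential frame accumulator -- the core of a classic
    'Trails'/feedback motion-blur look. `decay` controls how long
    old frames linger (0 = no trail, closer to 1 = long trail)."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    out = []
    for f in frames:
        # keep `decay` of the previous accumulation, blend in the new frame
        acc = decay * acc + (1.0 - decay) * f.astype(np.float64)
        out.append(acc.copy())
    return out
```

With `decay` around 0.85–0.95 a bright object leaves a fading trail over roughly a second of footage at 30 fps.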
You could create a User Actor that wraps the Jump actor and uses an input to gate the Jump feature on and off. Maybe use a Global Value to switch the Jump state. Of course, you would then have to use this gated Jump in place of the standard Jump actor everywhere.
That sounds right. The development cost of effective onstage LiDAR tracking is likely to be higher than either of the following options. iOS LiDAR (2020–21) has a reported range of around 5 m, compared to roughly 9 m for the Intel RealSense L515 LiDAR. Comparing the total cost of ownership (purchase cost + software), the RealSense LiDAR is more cost-effective, particularly for Windows systems. If you are an Apple user and already have a LiDAR-equipped iOS device, it will be a great prototyping / proof-of-concept tool before you invest in the higher cost required to track a larger performance area.
You may remember I made a feature request (see below) some time ago to allow the import of timeline markers from movies created in NLEs, e.g. Adobe Premiere and After Effects. The timeline marker info is contained within the header of the video file.
Why would this be useful? In your NLE you can easily lay down markers to a beat. You can also see the waveform of an audio track and visually sync your markers to it for fine-tuning. If one could bring the exported video file into Isadora with the markers intact, and there were a marker-trigger actor, it would then be trivial to trigger events at the exact point in movie playback.
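For illustration: if the NLE writes its markers into the container as chapter atoms (whether Premiere does this depends on the format and export settings — that part is an assumption on my end), a tool like ffprobe can read them back out with `ffprobe -print_format json -show_chapters movie.mov`. A sketch of turning that JSON into trigger times (the sample data below is invented):

```python
import json

# Invented sample of what `ffprobe -v quiet -print_format json
# -show_chapters movie.mov` emits; real marker names/times will differ.
FFPROBE_JSON = """
{"chapters": [
  {"id": 0, "time_base": "1/1000", "start": 0,    "end": 2000, "tags": {"title": "Intro"}},
  {"id": 1, "time_base": "1/1000", "start": 2000, "end": 5500, "tags": {"title": "Beat drop"}}
]}
"""

def marker_times(ffprobe_json):
    """Return (seconds, title) pairs for each chapter/marker.
    Chapter start times are expressed in the chapter's time_base."""
    data = json.loads(ffprobe_json)
    markers = []
    for ch in data.get("chapters", []):
        num, den = map(int, ch["time_base"].split("/"))
        markers.append((ch["start"] * num / den, ch["tags"].get("title", "")))
    return markers
```

A hypothetical marker-trigger actor would essentially do this once on load, then fire when movie position crosses each stored time.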
The recent improvements to timecode functionality in Isadora have made timecode triggering much easier, but you still need to find the exact point, set up a Comparator actor, and so on.
@juriaan For the sake of leaving a concrete example:
Imagine I have two pieces of software or hardware, say two X32 sound boards, and I want bidirectional control of each (if I move a fader on one board, it moves the corresponding fader on the other). Both boards, being identical, obviously use the same OSC API (same OSC addresses, same payloads). If we point both boards at Isadora port 1234, Isadora's channel system has no way of telling which packet came from which board. This makes the solution impossible to build unless an intermediary piece of software, or multiple Isadora files on different IPs, is used.
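The information needed to tell the boards apart is actually already on the wire: every UDP datagram arrives with its sender's address, which a direct-receive feature could expose. A rough plain-socket sketch of demultiplexing by source IP (my own helper names and handler scheme, not a proposal for Isadora's internals):

```python
import socket

def demux_by_source(port, handlers, timeout=1.0):
    """Receive raw OSC-over-UDP datagrams on one port and route each
    to a handler keyed by the sender's IP -- the piece of information
    an address-only channel system discards. `handlers` maps IP -> callable."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    sock.settimeout(timeout)
    try:
        while True:
            # recvfrom returns the sender's (ip, port) with every datagram
            data, (ip, _src_port) = sock.recvfrom(4096)
            handler = handlers.get(ip)
            if handler:
                handler(data)
    except socket.timeout:
        pass
    finally:
        sock.close()
```

With this, two X32s sending identical `/ch/01/mix/fader` messages to the same port are trivially distinguished by their IPs.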
A simple OSC Direct Receive actor would go a long way. I've spoken with Mark before about how I imagine it working, including OSC wildcard syntax, where the actor could lean on the hierarchical address tree of OSC and accept partial or complete address filters for the info it receives. But even just starting with a direct receive on a given port would be very helpful.
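For reference, the OSC 1.0 spec defines `*`, `?`, `[...]`, and `{a,b}` wildcards for address pattern matching, with `*` matching within a single address part only. A minimal sketch of what such filtering could look like (my own helper names; `{a,b}` alternation is left out for brevity):

```python
from fnmatch import fnmatchcase

def osc_match(pattern, address):
    """OSC-style wildcard match, segment by segment, so '*' does not
    cross '/'. Handles '*', '?', and '[...]' via fnmatch; OSC's
    '{a,b}' alternation would need an extra translation step."""
    psegs = pattern.strip("/").split("/")
    asegs = address.strip("/").split("/")
    return len(psegs) == len(asegs) and all(
        fnmatchcase(a, p) for p, a in zip(psegs, asegs))

def osc_prefix_match(prefix, address):
    """Partial-address filter: true if `address` lives under `prefix`
    in the OSC tree (e.g. '/ch/01' covers '/ch/01/mix/fader')."""
    parts = prefix.rstrip("/").split("/")
    return address.split("/")[:len(parts)] == parts
```

So a hypothetical Direct Receive actor with the filter `/ch/*/mix/fader` would catch every channel's fader, while `/ch/01` as a prefix filter would catch everything for channel 1.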
At this time, up to 16 audio channels can be used. This applies to sound routing with the Sound Player actor. I believe this limitation was only introduced for UI reasons. I will add a feature request to open this up further.
Thanks for the explanation. There are many layers of data structures that need to be serialized and deserialized. Not knowing the internals, it's hard to judge what that would take, so it's very helpful to hear it from you.
We are working hard to improve the audio system and live capture in Isadora. Please submit your feature requests via the Support link in my signature. We evaluate each and every feature request we receive and have improved our planning process many-fold over the past few years. I am very excited about the next versions of Isadora.