Audio Features for Isadora: What Do You Want?
-
@fred Oh, and also an updated audio player that has the same kind of interface/input parameters as the Movie Player currently has. I find myself making movies of audio almost all the time just so I can use the Movie Player instead of the Sound Player.
-
As far as I know, Isadora offers routing MIDI in and out of Isadora. When it comes to manipulating MIDI, we use DAWs like Ableton Live or others.
And maybe that is the most efficient way when it comes to making music, since Mark and the crew have to prioritise their efforts to improve Isadora. And it is just a thought: what would happen if we had actors like MIDI Chord, MIDI Arpeggiator, MIDI Transpose and some others to be used with Isadora actors like 3D Particles?
I am pretty sure this can be done in Isadora already, albeit with some heavy math.
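For a sense of scale: the note-number math behind such actors is genuinely small. Here is a rough sketch in Python of what MIDI Transpose, MIDI Chord, and MIDI Arpeggiator logic might look like; the function names are hypothetical illustrations, not Isadora actors:

```python
# Hypothetical sketches of the MIDI manipulations mentioned above --
# just the note-number math such actors would encapsulate.

def midi_transpose(notes, semitones):
    """Shift every MIDI note number, clamped to the valid 0-127 range."""
    return [max(0, min(127, n + semitones)) for n in notes]

def midi_chord(root, intervals=(0, 4, 7)):
    """Build a chord from a root note; default intervals give a major triad."""
    return [root + i for i in intervals]

def midi_arpeggiate(chord, pattern="up"):
    """Return the order in which an arpeggiator would emit the chord's notes."""
    notes = sorted(chord)
    if pattern == "down":
        return notes[::-1]
    if pattern == "updown":
        return notes + notes[-2:0:-1]
    return notes  # "up"

# C major triad (C4 = 60), transposed up a fifth, then arpeggiated up/down
chord = midi_chord(60)                   # [60, 64, 67]
print(midi_transpose(chord, 7))          # [67, 71, 74]
print(midi_arpeggiate(chord, "updown"))  # [60, 64, 67, 64]
```

The hard part of such actors is not this arithmetic but scheduling and MIDI I/O, which Isadora already handles.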
But since I am working a bit with MIDI and know what fireworks might start with these MIDI features, it could very well be worth it, couldn't it?

Best regards, Tom
-
Make them first class citizens of the Isadora program
Audio, in my opinion, should not be included as a parameter in the Movie Player / Sound Player where you just choose which channels the audio track should go to. Instead, I propose that audio becomes a signal that we can route / alter throughout the program in future releases of Isadora. (After Izzy 3.1 it opens the possibility of adding VST plugins or certain audio plugins, and allows and empowers the community to create audio tools in the plugin section of the website.)
How I personally would love to see it:
- Audio becomes a signal (just like video / text / floats / ints)
- Audio has to be routed to an output device using an Audio out node. (So we hide the parameters of the Movie player / Sound player, they are only there to play the file)
- Audio input can come from a Movie Player / Sound Player / Live Capture device / NDI (with audio)
- Audio channels can be linked in the Audio Output > Setup window. There you can specify, for example, that Output 1 should map to Output 5 on your sound card
The workflow above already exists in the program for video, so it is not a new practice being introduced, and it makes sense if we compare it to how the stages work for video (generators / inputs that are sent to a Projector that has a stage connected to it).
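To make the parallel concrete, here is a minimal Python sketch of what "audio as a signal" could look like as a pull-based graph: a player produces blocks of samples, an effect sits in between, and an audio output node maps a logical channel onto a device channel. All class names are hypothetical illustrations, not Isadora internals:

```python
# Hypothetical sketch of "audio as a signal": a source node, an effect
# node, and an output node with a channel map (logical -> device),
# mirroring how video flows from a generator through effects to a Projector.

class SoundPlayer:
    def __init__(self, samples):
        self.samples = samples   # one mono channel, for simplicity
        self.pos = 0

    def pull(self, n):
        """Return the next n samples (zero-padded past end of file)."""
        block = self.samples[self.pos:self.pos + n]
        self.pos += n
        return block + [0.0] * (n - len(block))

class Gain:
    """An effect inserted between source and output, like a video effect."""
    def __init__(self, upstream, gain):
        self.upstream, self.gain = upstream, gain

    def pull(self, n):
        return [s * self.gain for s in self.upstream.pull(n)]

class AudioOut:
    """Routes a logical channel to a device channel, like Output 1 -> Output 5."""
    def __init__(self, upstream, device_channel):
        self.upstream, self.device_channel = upstream, device_channel

    def render(self, n, device_channels=8):
        frame = [[0.0] * n for _ in range(device_channels)]
        frame[self.device_channel] = self.upstream.pull(n)
        return frame

out = AudioOut(Gain(SoundPlayer([1.0, 0.5, 0.25, 0.125]), 0.5), device_channel=4)
frame = out.render(4)
print(frame[4])   # [0.5, 0.25, 0.125, 0.0625] -- audio landed on device channel 5
```

The point of the sketch is only that the topology (source -> effect -> output with a channel map) mirrors the existing video pipeline; a real implementation has to meet hard real-time deadlines, which this toy does not address.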
Input matters
Video signals in Isadora come mainly from media files and/or the occasional live camera / sensor feed / etc.
What I propose is that we get a dedicated way of getting the inputs of our sound card inside Isadora. This can be done through the Live Capture settings -
Hi,
Unlike @Juriaan, I am not averse to audio channels (or audio frequency bands) integrated into the Movie Player. This is a preference for the kind of synaesthetic and generative work that I focus on when using Isadora, but perhaps I am in the minority here.
I do agree that a module architecture dedicated to audio is going to be a most useful and stimulating development. However, I don’t see an issue with having a diversity of multichannel audio solutions associated with flavours of media input and playback.
Since the discussion is prefaced with a specific roadmap for multichannel audio it is unclear how the more revolutionary ideas suggested by participants in this thread might be considered.
Isadora is not a timeline-based interface, but to my mind a waveform-over-timeline module for audio playback, looping, de-vamping, live channel mapping, and live effects routing would be an invaluable addition to the software.
Best wishes
Russell
-
hi,
perhaps the Sound Player and Movie Player could have an option to display the audio waveform in the progress bar. That way the module could be expanded to show more waveform detail and create more accurate loops, etc.
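For what it's worth, a waveform overview like that is typically built by reducing the audio to one (min, max) peak pair per horizontal pixel, so zooming in just means recomputing the buckets over a narrower sample range. A small Python sketch of that reduction (the helper name is hypothetical, not part of Isadora):

```python
import math

def waveform_peaks(samples, width):
    """Reduce samples to `width` (min, max) pairs, one per pixel column."""
    n = len(samples)
    peaks = []
    for px in range(width):
        lo = px * n // width
        hi = max(lo + 1, (px + 1) * n // width)  # at least one sample per bucket
        chunk = samples[lo:hi]
        peaks.append((min(chunk), max(chunk)))
    return peaks

# One second of a 440 Hz sine at 48 kHz, reduced to a 200-pixel overview
samples = [math.sin(2 * math.pi * 440 * t / 48000) for t in range(48000)]
peaks = waveform_peaks(samples, 200)
print(len(peaks))   # 200 -- one (min, max) pair per pixel to draw
```

Drawing is then just a vertical line from min to max per column, which is why this display stays cheap even for long files.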
-
Very good idea! And timecode control, start, stop, pause, jump, loop functions, input and output routing, output devices, sound output of videos for manipulation, etc.
greetings Matthias
-
Yes! yes! yes!
-
So for all of you passionate about, as Juriaan said it, "making audio a first class citizen": as I said, implementing any kind of plugin structure where audio is routed through the program is a major undertaking. Are all of you saying you'd prefer not to have the new audio routing features (which I believe we can give you in relatively short order), and instead that you'd prefer to wait until the fall of 2020 to get fully patchable audio added to Isadora?
On macOS adding this kind of functionality would require far less effort, because Core Audio gives it to you as what is essentially a built-in feature. I have searched for an open-source ASIO-based VST host for Windows that emulates the behavior of what macOS offers -- but I have never come across one. That means we would have to build the whole thing from nothing, and then test it and make sure it is reliable in mission-critical situations. This effort would pretty much consume the resources of the company, and aside from bug fixes, I don't think you'd see many other major improvements if we were to take on such a project.
Why do I say fall 2020? I've become far more cautious about estimating how long it takes for us to do something this big. I am guessing it would take roughly four to five months to implement the core features we need. I would then add two to three months of internal testing plus at least two more months of beta testing. This comes out to a development time of approximately eight to ten months.
Thoughts?
Best Wishes,
Mark

P.S. I am not saying I'm going to take this on. There is a constellation of concerns that will determine which features we add when. I am trying to:
1) hear what you want most, and
2) give you a real-world time frame for accomplishing the features you're requesting. -
I prefer to have the new audio routing features as soon as possible, and that you keep time to correct bugs and optimise existing actors.
Thanks
best
Jean-François
-
Hi,
Perhaps you could release the OpenNI tracker suite of plugins, push for increased sales based on its awesomeness, fill the coffers / get cashed up, and outsource the audio module architecture for a major release in 12-18 months as 'Isadora 4 Audio'. All the while, in the short term, providing a multichannel Movie Player for PC.
Sounds like a plan - but ?
-
@mark For me VST is something that can wait. I would really like audio treated the same way as any other signal: node-based routing and connecting outputs to inputs for all audio channels -- audio from videos, audio inputs, and audio playback -- and having audio outputs as an actor. But I understand this is a big task and will need to be cut into chunks and worked on progressively. I am sure you have checked out all of these open-source projects, but JACK, OpenAL, dr_wav, dr_mp3, dr_flac and stb_vorbis are some useful libraries I have come across, though my requirements for licensing are much less strict than yours.
I am not sure about timing. IMHO it is better to wait for a complete interface (if the node-based approach is what will slow you down), rather than expand on the current audio routing setup or make another intermediate interface. -
@bonemap said:
Perhaps you could release the OpenNi tracker suite of plugins push for increased sales based on its awesomeness
I think we're quite close to being ready for public beta on those plugins, and we will see if it affects sales. But, frankly, no theater designer who does your standard sort of production is probably going to care in the slightest about real-time skeleton tracking -- though I know supporting these cameras is definitely going to help us stand out.
Best Wishes,
Mark -
@mark said:
no theater designer who does your standard sort of production is probably going to care in the slightest about real-time skeleton tracking
When you say it like that, it sounds like a really dull market you are pushing into. Let's hope there is an excited and expanding market for your vision. For one thing, the power of OpenNI is not just skeleton tracking; there are many awesome techniques possible with the depth-sensing capabilities. All kinds of incredible scenographic projection masks are going to be possible with the extended depth range of the new cameras. You just need to own it and get it out there for designers and artists to explore the potential. I really wish you the best for your efforts.
-
Great to hear audio thoughts are on the horizon!
Seconding @jhoepffner that the most important function for me is stable input / output routing.
Seconding @Juriaan that being able to treat audio as another signal type would be valuable -- especially with regard to being able to easily loop audio from a video output back around, and basic analysis.

For both of these use cases, better support for Dante and NDI would be helpful. Dante is becoming my go-to replacement for Soundflower, but it's not really composer friendly.
Seconding @bonemap on the desire for more reliable, discrete multichannel audio from the Movie Player.
Seconding @michel on the usefulness of 24-bit audio file playback. Most files I get these days are 24-bit, and while it's easy to recompress them -- I've twice experienced sound designers demanding an entire separate machine, solely to play a 24-bit AIF file, because of concerns over the 16-bit reduction...
On VST plugins: while it would be lots of fun for small projects to be able to use VST plugins inside Izzy and do more analysis without going to Max, realistically for major projects it would be hard to get past the need for Max and Ableton running on a separate machine.
Ian
-
Thank you for reinforcing the points that mean the most to you. For others that may not yet have commented, giving a +1 as Ian just did is useful for me.
But one technical point for all of you considering this to understand:
There is no real difference between adding 'audio' as a signal path and a structure that supports VST plugins. As soon as 'audio' is a signal, you'll need an audio version of the Projector actor -- i.e., an actor that pulls the audio data from upstream audio providers like the Movie Player or Sound Player so that it can be sent to the audio output device. Once you have this structure, inserting an actor between those two -- whether it is to do sound frequency analysis or audio effects processing -- is not really a significant effort.

One source of complexity comes in when you have multiple instances of this imaginary "Audio Projector" actor. Consider the situation where one is receiving audio at 48K and another at 44.1K, which means you've got to start doing re-sampling, and you've also got to manage audio volumes to ensure the output doesn't get overloaded, etc. There is also the issue of channels: what happens when you connect an eight-channel audio stream to a two-channel one? Do you just drop the extra six? Do you mix the eight channels down to two?

But mostly, the biggest issue with these kinds of "audio graphs" -- going from audio source actor to audio effect to audio output actor -- is that they need to function with super precise timing. An "Audio Projector" actor is going to be asking for blocks of something like 256 samples (roughly a 5 millisecond chunk) at a rate of nearly two hundred times per second. If you are late by even a fraction of a millisecond, the buffer doesn't show up at the right time and you've got a big fat click.
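The timing constraint described above can be put in numbers. A quick Python sketch, assuming a 48K sample rate and 256-sample blocks (the exact figures are illustrative, matching the ballpark given in the text):

```python
# The real-time constraint of an audio graph, in numbers.
# Assumed figures: 48 kHz sample rate, 256-sample blocks.
SAMPLE_RATE = 48000   # samples per second
BLOCK_SIZE = 256      # samples the output actor requests per callback

block_ms = BLOCK_SIZE / SAMPLE_RATE * 1000     # duration of one buffer
blocks_per_second = SAMPLE_RATE / BLOCK_SIZE   # how often a buffer is due

print(f"{block_ms:.2f} ms per block")          # 5.33 ms per block
print(f"{blocks_per_second:.1f} blocks/sec")   # 187.5 blocks/sec
# Miss even one of those ~5 ms deadlines and the device plays a gap: a click.
```

This is why audio graphs are typically driven from a high-priority callback thread with no locks or allocations in the render path, rather than from the frame-rate loop that drives video.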
That's my big hesitation on this front; getting such a system to work perfectly, so that not even one of those buffers is missed, is something that will require serious effort and considerable testing.
Best Wishes,
Mark -
@mark I'm not sure if it makes a difference, as it seems you have evaluated the work involved already. But a fixed sample rate per patch seems like a fine limitation, with a pop-up explaining why. This could solve some of the other issues you mentioned. To deal with summing mixers there are a few approaches, from a very basic divide-amplitude-by-the-number-of-inputs, to even just letting overloads overload. How does this work now when you have multiple scenes with audio at full volume and activate them at once? Whatever behaviour you have implemented there would make sense to transfer.
As for patching, single or paired audio streams per connection would answer the question of what happens if there is a 6-channel movie and a stereo output; taking the first tracks of a multi-track output is also logical.
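The summing strategies mentioned above (divide by the number of inputs vs. just letting overloads overload), plus a naive many-to-two downmix, can each be sketched in a few lines of Python. All function names are hypothetical illustrations, not proposals for actual actors:

```python
# Hypothetical sketches of the mixing choices discussed above.

def sum_with_attenuation(inputs):
    """Sum several streams, dividing by the number of inputs to avoid overload."""
    n = len(inputs)
    return [sum(frame) / n for frame in zip(*inputs)]

def sum_with_clipping(inputs):
    """Just sum and hard-clip to [-1, 1] -- i.e. let overloads overload."""
    return [max(-1.0, min(1.0, sum(frame))) for frame in zip(*inputs)]

def downmix_8_to_2(channels):
    """Naive 8-to-2 downmix: odd-numbered channels left, even-numbered right."""
    left = sum_with_attenuation(channels[0::2])
    right = sum_with_attenuation(channels[1::2])
    return left, right

a = [0.75, 0.5]
b = [0.75, -0.25]
print(sum_with_attenuation([a, b]))   # [0.75, 0.125]
print(sum_with_clipping([a, b]))      # [1.0, 0.25]
```

Dividing by the input count is safe but quiet when few inputs are active; clipping preserves level but distorts on peaks, which is why real mixers usually offer per-input gains instead of either extreme.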
Having said that as much as I would like to see this, it does seem like quite a rabbit hole to go down. As well as our proposals, would you be willing to share what you have thought of doing in the intermediate term for audio?
-
Hi all! First of all, thanks for all the work and dedication of @mark, and for his punctual and meticulous work. Thank you!
Secondly, I think we should pay attention to Mark's knowledge, his experience and his instinct. I have the feeling that insisting on the ASIO question may not be the best way, but that is just my own view. Perhaps first move forward with what Mark knows will consume less time to implement, instead of delaying the update for another two months.
What I do think would be great is to first solve the issue of depth cameras and skeleton tracking. It is also a highly anticipated announcement, and has been for some time now. And I think it will be another step in positioning Isadora among the rest of the software.
A big hug to all!

Maxi Wille - Isadora Latin Network. RIL
-
Visual/graph based equalizer
the ability for MP3s and compressed audio file types to sit in the audio bin along with .WAV and .AIFF