Audio Features for Isadora: What Do You Want?
-
Dear @Fred, @Juriaan, @RIL, @bonemap @ian @jfg @kirschkematthias @tomthebom @kdobbe @Michel @DusX @Woland @eight @mark_m @anibalzorrilla @knowtheatre @soniccanvas @jhoepffner @Maximortal @deflost @Bootzilla
I have a question that I'd like you all to weigh in on.
If you have a mono sound routed to two or more channels, the function of "pan" is clear.
If you have a stereo sound routed to two channels, the function of pan is also pretty clear: in most software, as you pan left, it reduces the right output volume and does not change the left output volume; as you pan right, it reduces the left output volume and does not change the right output volume.
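That balance-style behaviour can be sketched in a few lines of Python (a hypothetical `balance_pan` helper for illustration, not any particular app's code):

```python
def balance_pan(pan):
    """Balance-style pan for a stereo source, as described above.

    pan is in [-1.0, 1.0]; 0.0 is center. Panning left attenuates
    only the right channel; panning right attenuates only the left.
    Returns (left_gain, right_gain) as linear multipliers.
    """
    if pan < 0:          # pan left: duck the right channel
        return 1.0, 1.0 + pan
    else:                # pan right: duck the left channel
        return 1.0 - pan, 1.0

# Center leaves both channels untouched:
# balance_pan(0.0) -> (1.0, 1.0); balance_pan(-1.0) -> (1.0, 0.0)
```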
But what does pan mean if you have an 8-channel file routed to 8-outputs? Or a 4-channel file to 8-outputs? Or 8-channel file routed to 4 outputs?
Do you have an expectation of panning with routings like this? Frankly, it doesn't really make sense to me.
Best Wishes,
Mark

P.S. This is how the UI is looking today. I will build a version for the team to try (macOS only for the moment). I expect beta testers to have it in the next couple of weeks or so.
-
@mark said:
an expectation of panning
Hi,
I found the following quote that sums up my expectation of multi-channel panning.
“Panning gives audio channels their own space in the stereo field. It can be used to eliminate masking by moving sounds out of the way of other sounds so the listener can clearly define them.”
IMO this relates to multichannel audio (not just 2-channel stereo). It assumes that a typical speaker setup will be based on left and right stacks, or spatial left, right and centre positions.
So a way to calibrate the degrees when panning a channel is important to build the stereo spread for a typical rig. For installations that have audio ‘zones’, calibrated panning facilitates the separation of channels to deliver to individual spaces.
Best wishes
Russell
-
The other instance where panning is critical is when there is a need to separate stereo pairs that might be embedded in an audio file. An eight channel file might be in the format of eight stereo channels and may then require panning to isolate the monaural tracks...
-
@bonemap said:
An eight channel file might be in the format of eight stereo channels and may then require panning to isolate the monaural tracks...
First of all, let's make it clear that AIFF and WAVE files do not themselves support the notion of "stereo" channels. A mono file has one channel, a stereo file has two channels, a quadraphonic file has four channels, etc. Your example above would end up being expressed as 16 individual channels in those file formats.
It is true that QuickTime movies support the notion of multiple tracks, and each track can have an arbitrary number of channels. (None of the Windows formats support this idea as far as I know.) My proposal for a movie with eight stereo tracks is that we would view them like the AIFF files: as 16 individual channels. Then you can route anything anywhere you want.
For example, here's eight mono tracks routed down to stereo. Here the panning would be clear because you end up with two outputs.
Or a different routing, where all eight channels are being routed to all eight outputs. What does panning mean in this situation?
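To make the file-format point concrete: a WAVE header records a flat channel count and nothing about "stereo pairs". A minimal parser can read it straight out of the 'fmt ' chunk (a sketch with a hypothetical `wav_channel_count` helper, not Isadora's actual code):

```python
import struct

def wav_channel_count(path):
    """Read the channel count from a WAVE file's 'fmt ' chunk.

    Walks the RIFF chunks until 'fmt ' is found, then unpacks the
    nChannels field. The format stores only this flat count; any
    pairing of channels is purely a convention of the producer.
    """
    with open(path, "rb") as f:
        riff, _size, wave_id = struct.unpack("<4sI4s", f.read(12))
        assert riff == b"RIFF" and wave_id == b"WAVE"
        while True:
            header = f.read(8)
            if len(header) < 8:
                raise ValueError("no 'fmt ' chunk found")
            chunk_id, size = struct.unpack("<4sI", header)
            if chunk_id == b"fmt ":
                _fmt_tag, channels = struct.unpack("<HH", f.read(4))
                return channels
            f.seek(size + (size & 1), 1)  # chunks are word-aligned
```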
Best Wishes,
Mark -
I think panning is only useful in a 2-speaker setup. Once you get to multichannel output, directions go in 3D. I still think the modular way is the most flexible. Suppose you have a background running in 5.1 with 6 discrete WAV channels, but you want to pan a live input around 360 degrees over it.
Input blocks, mix/routing/panning blocks and output blocks.
-
@mark said:
Your example above would end up being expressed as 16 individual channels in those file formats.
Thanks for correcting that - these posts are not user editable. The intended comment was meant to read four stereo pairs becoming an 8 channel file.
If stereo pairs are going to be irrelevant to the ‘sound player’ then Stereo panning is irrelevant too, I would have thought.
Best wishes
Russell
-
I would think that panning is not a necessary feature to be included inside actors. Presumably a panning effect can be achieved anyway by patching something together that combines the matrix with separate level controls for each channel. If the way you have approached sound routing never mentions Left and Right, it doesn't limit your setup to pan-able stereo.
Ableton Live has a feature where you can assign A or B labels to different tracks and use a crossfader between the 2 (groups) - unassigned tracks are unaffected.
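The matrix-plus-level-controls idea can be sketched as a plain gain matrix (hypothetical `mix` helper and example matrix, assuming NumPy; any "pan" is just a particular choice of matrix entries, so no Left/Right assumption is baked in):

```python
import numpy as np

def mix(frames, matrix):
    """Route/mix interleaved audio through a gain matrix.

    frames: (num_samples, in_channels) array of samples.
    matrix: (in_channels, out_channels) array of linear gains.
    Each output channel is a weighted sum of the input channels.
    """
    return frames @ matrix

# Hypothetical example: collapse 4 input channels to stereo, with
# channels 0/1 hard left/right and channels 2/3 split at equal power.
pan_matrix = np.array([
    [1.0,    0.0],
    [0.0,    1.0],
    [0.7071, 0.7071],
    [0.7071, 0.7071],
])
```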
-
@mark panning for more than 2 tracks is pretty irrelevant without some kind of spatial audio engine and an idea of speaker locations. Systems like Spat, which allow for that, understand the locations of the speakers and use something parallel to ray casting to calculate what a multichannel sound would sound like from each speaker if it were rotated in a multi-speaker environment. Without all this extra data, such panning is irrelevant. With individual volume controls for each channel, sounds can be rebalanced to suit a speaker setup, re-routed for mismatched channel mappings, or, where the multichannel file is used to carry sub-mixes or headphone feeds, used to create sends and sub-mixes. This is a pretty big step forward; when serious audio work in a spatial environment needs to be done, other tools are needed.
-
@mark panning a stereo file also needs a -3 dB cut at the central position. Panning multichannel audio needs something more complex, so at least for this first iteration it can be left out. Just left to 1 3 5 7 and vice versa can be enough.
-
@maximortal said:
panning a stereo file also needs a -3 dB cut at the central position.
Yes -- the panning uses the -3 dB "equal power" formula. There are actually a few panning formulas, but that one is common.
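For the curious, the -3 dB "equal power" law is commonly implemented with a cosine/sine pair so the summed power stays constant across the sweep (a sketch; `equal_power_pan` is a hypothetical name, not Isadora's API):

```python
import math

def equal_power_pan(pan):
    """Equal-power ("-3 dB") pan law for a mono source.

    pan is in [-1.0, 1.0], 0.0 = center. Gains follow cos/sin over a
    quarter circle, so left**2 + right**2 == 1 at every position,
    keeping perceived loudness constant while panning.
    """
    theta = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    return math.cos(theta), math.sin(theta)

left, right = equal_power_pan(0.0)
# At center each gain is 1/sqrt(2) ~= 0.707, i.e. about -3 dB:
# 20 * math.log10(left) ~= -3.01
```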
Best Wishes,
Mark -
@bonemap said:
If stereo pairs are going to be irrelevant to the ‘sound player’ then Stereo panning is irrelevant too, I would have thought.
Well, if you're outputting to a pair of channels, then I would expect panning to work, and it does.
It seems like the general consensus is that this is the only situation I should worry about. If you're outputting to more than two channels, I think the pan input will show as "n/a" to indicate it is not applicable.
Best Wishes,
Mark -
@bonemap said:
these posts are not user editable.
You don't get these two options by clicking on the three dots at the bottom right of your comments?
-
@woland said:
You don't get these two options by clicking on the three dots at the bottom right of your comments?
It's because this thread is in Isadora Announcements -- I think this category has some limitations for users in terms of editing. We could move the thread to another category and that would probably solve it.
Best Wishes,
Mark -
Audio and Timeline
I know this would be a FUTURE request, but for me one of the most important features missing in Isadora is the concept of a timeline and events. Audio and video are to me obvious ways in which to implement this approach in Izzy. I would love to be able to synchronize multiple events (triggers of numerous media, controllers, etc.) in exact relationship to TIME.
If the audio or video had a correlated grid where one could place multiple events, my live performance creations would progress dramatically with less programming time.
If any of you recall the Macromedia Director program (long gone), that interface was later absorbed into Flash. This timeline-based software is incredibly powerful but does not have the flexibility and programming possibilities that Izzy has. To me, if this were added to Izzy... it would move Izzy into a new category of usability.
My 2 cents worth :)
-
@kdobbe said:
one of the most important features missing in Isadora is the concept of timeline and events.
Isadora is Scene-based and while it can do linear-cueing, it's not timeline-based linear cueing. In turn this allows for greater flexibility and the possibility to do non-linear cueing. "Events" though can be created with Timer actors, Trigger Delay actors, Clock actors, Comparators, etc.
@kdobbe said:
I would love to be able to synchronize multiple events (triggers of numerous media, controllers, etc) in exact relationship to TIME.
You can build your show to run off of time. It's not a graphical timeline interface, but there's the Timecode Comparator (and the afore-mentioned actors).
-
Dear All,
Some of you might be happy to see what I got working in Windows today. ;-)
-
@mark PD implements an ambisonic system for sound spatialization, but clearly it is not a priority. I am very glad for your fantastic work, best!
-
@mark I want to generate a few tests that relate to some work I was trying to achieve, here is the first:
https://www.dropbox.com/s/wdrx...
It is a 16 channel audio file, wav format 24 bit, 48k interleaved. The channels have a rising burst of tone and only one channel has sound at a time, it cycles through the channels one at a time a few times. Will this work with the setup you are developing?
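For anyone who wants to build a similar test file, here is a rough sketch that writes an interleaved WAV with a rising tone cycling through the channels one at a time (16-bit rather than 24-bit, since that is the simplest case for Python's wave module; `write_channel_sweep` is a hypothetical helper):

```python
import math
import struct
import wave

def write_channel_sweep(path, channels=16, rate=48000, seconds_per_ch=0.5):
    """Write an interleaved WAV where a tone bursts in one channel
    at a time, cycling through all channels once. Each channel gets
    a higher pitch than the last, so the sweep is easy to hear.
    """
    w = wave.open(path, "wb")
    w.setnchannels(channels)
    w.setsampwidth(2)                     # 16-bit PCM
    w.setframerate(rate)
    frames = bytearray()
    n = int(rate * seconds_per_ch)        # frames per channel burst
    for active in range(channels):
        freq = 220.0 * (1 + active)       # rising tone per channel
        for i in range(n):
            sample = int(20000 * math.sin(2 * math.pi * freq * i / rate))
            for ch in range(channels):
                frames += struct.pack("<h", sample if ch == active else 0)
    w.writeframes(bytes(frames))
    w.close()
```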
-
@fred said:
It is a 16 channel audio file, wav format 24 bit, 48k interleaved. The channels have a rising burst of tone and only one channel has sound at a time; it cycles through the channels one at a time a few times. Will this work with the setup you are developing?
You can find the answer to your question in this video link. ;-) Note that the speed is set to 2x so the whole sequence goes by faster.
Best Wishes,
Mark

P.S. To be honest, it didn't work until I updated the WAVE parser to understand the WAVEFORMATEXTENSIBLE structure used in this file -- but it was only a 10-minute job. I'm glad you sent the test file along. ;-)
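For anyone curious about that structure: WAVEFORMATEXTENSIBLE is signaled by format tag 0xFFFE and appends valid-bits-per-sample, a channel mask, and a subformat GUID after the plain fmt fields. A rough sketch of decoding it (not Isadora's actual parser):

```python
import struct

WAVE_FORMAT_EXTENSIBLE = 0xFFFE

def parse_fmt_chunk(data):
    """Decode the body of a WAVE 'fmt ' chunk, including the
    WAVEFORMATEXTENSIBLE extension used by multichannel files.
    """
    (tag, channels, rate, _byte_rate,
     _block_align, bits) = struct.unpack_from("<HHIIHH", data)
    info = {"channels": channels, "rate": rate, "bits": bits}
    if tag == WAVE_FORMAT_EXTENSIBLE:
        # Extension: cbSize, wValidBitsPerSample, dwChannelMask,
        # then a 16-byte SubFormat GUID identifying the sample type.
        _cb_size, valid_bits, channel_mask = struct.unpack_from("<HHI", data, 16)
        info.update(valid_bits=valid_bits,
                    channel_mask=channel_mask,
                    subformat_guid=data[24:40].hex())
    return info
```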