"Listen" live music
-
a simple way to listen to the notes
If you go to the Capture Settings window you will see a check box to turn on sound frequency analysis. You would then calibrate listening to frequency ranges using the Sound Frequency Watcher actor. Alternatively, if your audio is in a movie wrapper, you can use the movie player with the interactive setting and activate the hidden audio frequency outputs; these are automatically calibrated by the number of frequency ranges specified.
In my experience, for the effect of listening to individual notes, that amount of detail is only possible with audio input frequency analysis. I have used this technique often with close/pickup/contact microphones on a preamp and individual acoustic instruments, so it will work with careful calibration. Whether this technique is ‘simple’ will depend on the quality of the sound being analyzed and your patience in setting it up and calibrating it. Other than this, I don’t know of any other technique in Isadora to isolate individual audio notes as triggers, except for a MIDI sound system such as a MIDI keyboard etc.
Check the Isadora user manual (page 445) or the actor help pane for more information about the sound frequency analysis technique.
Best wishes
Bonemap
-
Personally, my experience with Isadora and audio (frequency bands / note detection) is that it is a pain. Isadora is not an audio application. If you are on Windows, the lack of support causes headaches (no ASIO, so multiple mics with a dedicated hardware solution is a no-go; no sound output settings that we can use :( The Mac has this, the Windows version does not.. )
What I do in my daily practice:
- Do the sound analysis part in Max MSP. Max MSP is an application that has its roots with audio artists / musicians / etc., so you can get the note with something like the CNMAT Externals for Max MSP: https://github.com/CNMAT/CNMAT...
- Send the data from Max MSP to Isadora using OSC (udpsend and udpreceive; also get the o. objects for Max if you go down this path). A sketch of the OSC side follows below.
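To make the OSC step concrete, here is a minimal sketch, not the Max patch itself: it uses the python-osc package to stand in for Max's udpsend, and assumes Isadora is listening on its default OSC port 1234 with the "/isadora/<channel>" address convention (otherwise map the incoming addresses in Isadora's stream setup). The values are placeholders for whatever your analysis detects.

```python
# Minimal sketch: push a detected note to Isadora over UDP/OSC.
# Assumptions: Isadora listens on its default OSC port 1234, and the
# "/isadora/<channel>" addresses reach OSC Listener channels 1 and 2.
from pythonosc.udp_client import SimpleUDPClient

ISADORA_IP = "127.0.0.1"   # machine running Isadora
ISADORA_PORT = 1234        # assumed default incoming OSC port

client = SimpleUDPClient(ISADORA_IP, ISADORA_PORT)

# Send the detected pitch twice: once as a MIDI note number, once in Hz.
client.send_message("/isadora/1", 69)      # A4 as a MIDI note number
client.send_message("/isadora/2", 440.0)   # the same note in Hz
```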
If you want an example patch, contact me at hello@juriaan.me
-
@juriaan said:
Mac has this, Windows version not..
Thanks @Juriaan, I should have said my description is for Mac. Even so, it is not a ‘simple’ technique and the variation of sound frequencies and levels in music can make it even more complex to maintain a workable calibration.
However, it can be a very powerful effect that tightly integrates sound and moving image.
Best wishes
Bonemap
-
Really interested in how you actually do it. Getting a note out of Isadora's features is quite complex, to say the least..
-
@juriaan said:
how you actually do it
Hi @Juriaan,
The way you describe using Max appears to be similar.
I isolate the sound of a musical instrument using a contact microphone or very close mic’ing, through a preamp audio interface. In Isadora I use multiple instances of the Sound Frequency Watcher per audio channel and calibrate each to listen for a narrow band of frequencies to match the individual notes of the scales played by the instrument.
I build this into a user actor with a series of calculators between each instance of the Frequency Watcher. That way I can duplicate and move the calibration of the frequency ranges of many Watchers all at once and in series.
This has worked well for instruments like guitars and harps. The thing with an individual note is that it is often played as part of a chord, so it is usual for more than one frequency range to be active simultaneously.
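As a rough illustration of the numbers that calibration involves (this is just a sketch, not an Isadora patch), here is some Python that computes a narrow band around each note of one octave of a C major scale in equal temperament; the ±2.5% band width is an arbitrary starting point, not a rule.

```python
# Compute a narrow frequency band around each note of one octave of a
# C major scale, equal temperament with A4 = 440 Hz. These are the kinds
# of low/high limits you would enter into each Sound Frequency Watcher.
def midi_to_hz(note: int) -> float:
    return 440.0 * 2 ** ((note - 69) / 12)

C_MAJOR_OCTAVE = [60, 62, 64, 65, 67, 69, 71, 72]   # C4 .. C5 as MIDI notes

for note in C_MAJOR_OCTAVE:
    centre = midi_to_hz(note)
    low, high = centre * 0.975, centre * 1.025       # narrow band per note
    print(f"note {note}: {low:.1f} Hz .. {high:.1f} Hz (centre {centre:.1f} Hz)")
```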
I should add that I have not used this technique to replicate anything scientific. I have used it to make generative art, so I have not been too tight with the tolerances and tend to use triggers and parameter constraints based on ranges rather than individual notes. Having said that, I did initially build a prototype for individual notes, but only in a single octave. After experiencing the ‘pain’ of doing that, I decided that less accurate ranges were going to be more efficient.
Best wishes
Bonemap
-
@cristina_spelti is the audio material monophonic or not?
-
Hello,
I was just doing a workshop about sound analysis during Werkstatt 2017; here are some of my reflections.
I agree the best way is to use Max/MSP, where you have the best tools dedicated to sound analysis, and it's very easy after that to send the information through OSC to use it inside Isadora.
An even better way, possible if the musician(s) use MIDI instruments, is to send MIDI information directly to Isadora to trigger what you want. It's a pain to go from sound to MIDI and then retransform it into OSC….
But if you want to stay inside Isadora, the latest version (I don't know if it's available yet because I use the beta version, but 2.6 is coming soon) has much better sound analysis, in particular a Sound Frequency Bands actor, which works quite well for triggering from sound.
Another trick for using sound inside Isadora is to use the routing facilities of your sound card; some (as RME does) can reroute outputs back to inputs, and there is also the option of rerouting output to input through a digital cable, ideally ADAT with 8 channels.
Jacques
-
@jhoepffner said:
Sound Frequency Bands,
Thanks Jacques, the list of new and updated features for the 2.6 release will be interesting reading. I am not sure what happened to Soundflower on the Mac, but internal routing of audio keeps coming up as an issue, particularly for keeping the frequency analysis within Isadora. I will definitely explore your suggestions.
@cristina_spelti, regarding your question about a ‘simple’ way, I believe it is to use Isadora’s internal sound analysis, unless you also have Max MSP or MIDI instruments that can be networked with Isadora.
Best wishes
Bonemap
-
Hi all!
@bonemap @crystalhorizon @jhoepffner @Juriaan
Thank you for this discussion. Actually I still have to choose how to set up my work. The source of the sound will be a string orchestra complemented by an electronic drum (aFrame drum), a piano, a MIDI keyboard, live voices, and 2 laptops for electronic sounds.
I would like to use only Isadora (its internal sound analysis) and maybe a MIDI connection with the keyboard ... but for this I have to study a lot, because I know very little about the use of MIDI. I'm still confused, but this discussion is helping me to understand.
Thank you!
-
This is a good way but takes too much time. I think that I can't do this alone, without the orchestra, and I have no time for rehearsing. I have to find something effective but simple.
Thanks
all my best!
-
Hi,
Take a look at imitone.
-
Interesting software! Great for kids 😀
-
I think SoundFlower is defunct now. I've heard of people using Sound Syphon ($40 USD) but haven't used it myself because of the price tag (I don't need audio routing very often).
Best wishes,
Woland
-
Another possible technique would be to send numerous frequency readings as OSC to Wekinator (a machine learning tool).
There you could train Wekinator to recognize triggers... this may prove to be a rather effective way to 'recognize' notes.
Wekinator can then send an OSC message back to Isadora if a trigger/note is recognized.
I haven't tested this... but it should absolutely be possible.
I've done similar with video.
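A minimal, untested sketch of what that round trip could look like, assuming Wekinator's documented defaults (it receives "/wek/inputs" on port 6448 and sends "/wek/outputs" to port 12000), an Isadora OSC port of 1234, and python-osc standing in for the Isadora side; the band values are just placeholders for real frequency readings.

```python
# Forward placeholder frequency-band values to Wekinator, then listen for
# its classification output and relay it onward to Isadora over OSC.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# 1. Send frequency-band readings to Wekinator as its input vector.
wekinator = SimpleUDPClient("127.0.0.1", 6448)
frequency_bands = [0.12, 0.80, 0.05, 0.33]            # placeholder band levels
wekinator.send_message("/wek/inputs", frequency_bands)

# 2. Listen for Wekinator's output and relay the recognized value to Isadora.
isadora = SimpleUDPClient("127.0.0.1", 1234)          # assumed Isadora port

def on_output(address, *values):
    print("Wekinator says:", values)
    isadora.send_message("/isadora/1", values[0])     # relay first output

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_output)
BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()
```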
-
Hi!
Can someone explain to me which actor I can use to listen to the music's "beat"?
My idea is to use the beat to add objects in 3D Particles.
Thanks
Cri
-
Hehehe, that is a tricky one. There is no 'Watch for BPM' actor in Izzy.
But let's see if we can come up with something:
1. Use another program (like Max MSP or Pure Data (free)) to get the BPM and send it to Izzy using OSC.
2. Calculate the BPM using the general rule of thumb: BPM = (reading in Hz) * 60 (since Hz is beats per second, not beats per minute).
Since we can't use Izzy's frequency watchers on a Sound Player, I would rather go with 1; the second option causes a lot of headaches that we would have to fix with some JavaScript... (And since Izzy doesn't provide an audio API, we can't simply put filters on the values, which means a lot of coding to make this possible with math.. see the sketch below.)
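For what it's worth, here is a minimal sketch of option 2's arithmetic (plain Python rather than Isadora's JavaScript actor, with placeholder readings): convert a beat-rate reading in Hz to BPM and smooth the noisy values with a simple exponential filter.

```python
# Convert a beat-rate reading in Hz to BPM, after smoothing noisy readings
# with an exponential filter. The readings below are placeholder values.
def hz_to_bpm(hz: float) -> float:
    return hz * 60.0                 # beats per second -> beats per minute

def smooth(readings, alpha=0.2):
    """Exponentially smooth noisy readings (alpha = responsiveness)."""
    value = readings[0]
    for r in readings[1:]:
        value = alpha * r + (1 - alpha) * value
    return value

noisy_hz = [2.1, 1.9, 2.0, 2.2, 2.0]     # placeholder beat-rate readings
print(hz_to_bpm(smooth(noisy_hz)))        # about 124 BPM
```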
-
If you are using frequency analysis in Isadora, you can quickly connect a frequency watcher to a 'Tap Tempo' actor to get the BPM.
Tap Tempo is a very handy little actor for getting the BPM or Hz of any input.
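To illustrate the idea behind tap tempo (this is not Isadora's internal code, just a sketch of the calculation): average the interval between incoming triggers and convert seconds per beat into beats per minute.

```python
# Estimate BPM from the timestamps of incoming triggers by averaging the
# interval between consecutive taps over a short rolling window.
import time

class TapTempo:
    def __init__(self, max_taps: int = 8):
        self.taps = []
        self.max_taps = max_taps

    def tap(self, t: float = None) -> float:
        """Register a trigger; return the current BPM estimate (0 if unknown)."""
        self.taps.append(time.monotonic() if t is None else t)
        self.taps = self.taps[-self.max_taps:]
        if len(self.taps) < 2:
            return 0.0
        intervals = [b - a for a, b in zip(self.taps, self.taps[1:])]
        avg = sum(intervals) / len(intervals)    # seconds per beat
        return 60.0 / avg                        # beats per minute

# Feeding taps half a second apart yields 120 BPM.
tt = TapTempo()
for i in range(4):
    bpm = tt.tap(t=i * 0.5)
print(bpm)   # 120.0
```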
-