
...and I like the promise of this solution, and I await the outcome with interest...

Many thanks. I like the immediacy of this solution, and will definitely try it...

@notdoc said:
generate text directly from live speech in real time into Izzy
This is possible with Pythoner. I have started work on a script for this feature, but haven't had a chance to finish it yet. I got sidetracked with a module for selecting the audio device.
I'll get back to this soon.
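In the meantime, here is a rough sketch of the speech-to-text side in plain Python, not the Pythoner script itself. It uses the SpeechRecognition library for the transcription and python-osc to push each recognised phrase into Isadora; the OSC address "/speech/text" and port 1234 are my assumptions, so match them to whatever you set up in Isadora's OSC preferences:

```python
# Rough sketch: live speech -> text -> Isadora via OSC.
# Requires:  pip install SpeechRecognition pyaudio python-osc
# The OSC address and port below are assumptions -- match them to
# whatever you have configured in Isadora.

import speech_recognition as sr
from pythonosc.udp_client import SimpleUDPClient

OSC_HOST = "127.0.0.1"   # machine running Isadora
OSC_PORT = 1234          # Isadora's incoming OSC port (check your prefs)
OSC_ADDR = "/speech/text"

def main():
    client = SimpleUDPClient(OSC_HOST, OSC_PORT)
    recognizer = sr.Recognizer()

    # List the available audio inputs so you can pick the right device.
    for index, name in enumerate(sr.Microphone.list_microphone_names()):
        print(f"[{index}] {name}")

    # device_index=None uses the system default; set it to one of the
    # indices printed above to select a specific interface.
    with sr.Microphone(device_index=None) as source:
        recognizer.adjust_for_ambient_noise(source)
        print("Listening... (Ctrl-C to stop)")
        while True:
            audio = recognizer.listen(source, phrase_time_limit=5)
            try:
                # Google's free web API needs an internet connection;
                # recognize_sphinx() would keep it fully offline.
                text = recognizer.recognize_google(audio)
            except sr.UnknownValueError:
                continue          # nothing intelligible in this chunk
            except sr.RequestError as err:
                print(f"Recognition service error: {err}")
                continue
            print(text)
            client.send_message(OSC_ADDR, text)

if __name__ == "__main__":
    main()
```

On the Isadora side, once the incoming OSC stream is mapped in Stream Setup, an OSC Listener actor can pass the text straight into a Text Draw actor. Note that recognize_google relies on a free web service, so it needs an internet connection and adds a little latency; an offline engine is possible but takes more setup.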

A couple of years ago I used https://www.speechtexter.com/ in a performance situation. When a delay in the appearance of the text was acceptable, I would copy-paste the text into Isadora, but for real-time generation of text I captured the browser window with the Screen Capture actor and used that as a video source.

General question for the community:
I'm looking for a way to generate text directly from live speech in real time in Izzy and place it on screen with the Text Draw actor, and I would welcome any suggestions or relevant examples. It's a no-budget project at the moment.
thanks
ND

The 'value' input on the Broadcaster actor is mutable, meaning it changes to match the data type and range (if any) of the first output it is connected to. It sounds like you've connected the Broadcaster to an output whose range it has then adopted, like this:
Figure 1: The Wave Generator's 'value' output has the range min: 0, max: 100
Figure 2: A fresh Broadcaster actor's 'value' input shows the unbounded default range, min: MIN, max: MAX
Figure 3: Once connected to the Wave Generator, the Broadcaster's 'value' input mutates to match its range: min: 0, max: 100
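If it helps to see the idea outside the editor, here is a purely conceptual Python sketch of what the figures show. It is not Isadora internals or the Pythoner API, just an illustration of a mutable input starting unbounded and copying the range of the first output it is linked to:

```python
# Conceptual illustration only -- not Isadora code.

class Output:
    def __init__(self, name, min_value, max_value):
        self.name = name
        self.min = min_value
        self.max = max_value

class MutableInput:
    def __init__(self, name):
        self.name = name
        self.min = None    # shown as MIN in the editor (unbounded)
        self.max = None    # shown as MAX in the editor (unbounded)
        self.linked = False

    def connect(self, output):
        # Only the first connection mutates the range.
        if not self.linked:
            self.min, self.max = output.min, output.max
            self.linked = True

wave_value = Output("Wave Generator 'value'", 0, 100)    # Figure 1
broadcaster_value = MutableInput("Broadcaster 'value'")  # Figure 2
broadcaster_value.connect(wave_value)                    # Figure 3
print(broadcaster_value.min, broadcaster_value.max)      # -> 0 100
```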