The difficulty is distinguishing one sound from another. If you always want to use a clap, you could simply use the Sound Level Watcher and set the left/right trigger level to a relatively high number, e.g., 75. Then, when you clap and the volume exceeds 75%, you'll get a trigger out of the left/right trig output.
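Outside Isadora, the level-threshold idea can be sketched in a few lines of Python. (The block size, sample values, and the 0.75 threshold are illustrative assumptions, not Isadora internals.)

```python
# Sketch of a level-threshold trigger, analogous to the Sound Level Watcher.
# The threshold (75% of full scale) and the sample blocks are illustrative.

def level_trigger(samples, threshold=0.75):
    """Return True when the peak level of an audio block exceeds the threshold.

    samples: floats in the range -1.0..1.0 (one block of captured audio).
    threshold: fraction of full scale (0.75 ~ the 75% setting above).
    """
    peak = max(abs(s) for s in samples)
    return peak > threshold

# A loud clap-like block triggers; quiet background noise does not.
clap = [0.0, 0.9, -0.8, 0.4]
noise = [0.05, -0.03, 0.02, -0.04]
print(level_trigger(clap))   # True
print(level_trigger(noise))  # False
```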
You may have some luck with the Sound Frequency Watcher, but it's trickier to do. Your friend in this case is the frequency display that shows up in the Status window. Try the following:
1) Choose **Input > Live Capture Settings**
2) Ensure the Sound Input Device popup menu shows the device from which you wish to capture, e.g., "Built In Microphone."
3) Make sure the "Sound Frequency Analysis" checkbox is checked.
4) Click "Start Live Capture"
5) Choose **Windows > Show Status**
6) In the Status window, you'll see a Frequency display. I suggest setting the range to 5K using the little popup menu.
Look at the attached image sound-freq-sine-tone.png. That's me whistling, producing a pure sine tone around 1200 Hz. Knowing this, you could use the Sound Frequency Watcher to "listen" for a frequency of 1200 Hz. When that frequency is present, you can trigger on it in much the same way you do with the Sound Level Watcher. A flute, which also produces nearly pure sine tones, is one instrument where you can pick out specific frequencies successfully.
Then look at the image sound-freq-shhhhhh.png. That's me making a "shhhhh" sound. Notice that there are no low frequencies -- it's all in the high range, especially around 3500 Hz. You could use a different Sound Frequency Watcher to trigger on that frequency.
Thus, you could tell the difference between a "whistle" and a "shhhhh."
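The whistle-vs-"shhhh" distinction above can be sketched outside Isadora by measuring the energy at each candidate frequency. (The sample rate, block length, and synthesized test signals below are illustrative assumptions; the 1200 Hz and 3500 Hz peaks come from the images described above.)

```python
# Sketch of frequency-based detection, analogous to the Sound Frequency Watcher.
# We synthesize test tones and compare energy at the two peak frequencies.
import math

RATE = 8000  # sample rate in Hz (illustrative; 3500 Hz stays below Nyquist)

def tone(freq, n=1024, rate=RATE):
    """Generate n samples of a pure sine at the given frequency."""
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

def magnitude_at(samples, freq, rate=RATE):
    """Energy of `samples` at `freq`, via correlation with sine and cosine
    (a single-bin discrete Fourier transform)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * i / rate) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * i / rate) for i, s in enumerate(samples))
    return math.hypot(re, im) / n

def classify(samples):
    """Label a block 'whistle' or 'shhhh' by comparing energy near 1200 Hz
    (the whistle peak) against energy near 3500 Hz (the 'shhhh' peak)."""
    if magnitude_at(samples, 1200) > magnitude_at(samples, 3500):
        return "whistle"
    return "shhhh"

print(classify(tone(1200)))  # whistle
print(classify(tone(3500)))  # shhhh
```

In Isadora terms, each `magnitude_at` call plays the role of one Sound Frequency Watcher tuned to a particular frequency.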
The problem comes with rich sounds, like a violin or the human voice. They have many frequency components, and picking them out is tricky. Look at me singing the syllable "eh" (soft 'e') in the picture sound-freq-eh.png. There's a peak around 200 Hz, another around 1200 Hz, and a third around 2200 Hz. While the first one is the strongest, the other two are most definitely there. That's when it gets hard to distinguish one sound from another.
Hopefully this little primer helps you understand the basic concept, and how you might go about solving your problem.
Don't forget to read the manual on both the Sound Frequency Watcher and the Sound Level Watcher. (The PDF of the manual can be found in the Help menu.)
Seems right... though you seem to be missing the "end tell" statement. But in theory, the idea that you 1) open app A with file A, 2) delay, and 3) open app B with file B would seem to be the right behavior.
Describe what you'd like to see in the Feature Request topic. Then people can vote for it. I want to take advantage of the voting feature to help me know which features are most critical to the community of users.
Leaving a gap in the control numbers, as you mention, isn't possible in the current version. For now, the only way would be to add some extra controls that use up those control numbers. Then do your copy and paste. Finally, delete the extra controls.
Thanks Michel. I had previously tried using a generator directly connected to the color input of TextDraw (which doesn't really work) but hadn't considered the simple approach of just mapping the output of ColorMaker to TextDraw.
On my setup I experienced it being a little slow: sending on 2-10 channels, with about 80 used in the scene. The Net Broadcaster would mean less configuration on the other PCs; the idea was to split control and animations. I finished it using OSC. Good to learn about the Net Broadcaster.
and some tests later:
Net Broadcaster and OSC are working at the same speed. Maybe the Broadcaster actor caused the trouble. I rebuilt the scene using Net Broadcaster instead of Broadcaster, and now it works perfectly.
You can use Broadcaster/Listener actors so that every time you activate that scene, you also send it the values you need. I'd fade the scene in a little so that you don't see a jump from the sprite's local values to the broadcast ones.
I group actors based on what they are doing in the scheme of the whole patch. Groups are generally arranged so that signals flow from top to bottom and left to right, depending on the shapes of the actors and what makes the most logical sense to me given their respective functions. This all tends to fall apart as my patches get increasingly complex, since I add stuff in but don't want to rearrange everything that's already there. Still, I tend to have little trouble jumping into even the most complex of my patches a year or two after I made them and remembering what everything is doing, so my goal is more to minimize scrolling and speed up programming and editing than it is to fully illustrate the patch's design. All of my user actors are demonstrations of what my layout style looks like, if you are especially curious.