@montana said:
I have a text of a couple thousand words that I want to pull in so I can randomly access individual words. I'm trying to figure out how the Data Array actor works and whether that is the best method.
The best method depends on how random you want to be. Do you want a completely random word from anywhere in the file? Do you want to allow repeats? (If you don't want repeats, the Shuffle actor will be helpful.)
Personally, I'd pull in the entire text with a File Reader or Read Text From File actor, use Javascript to find out how many words are in the file, feed that maximum into a Shuffle actor to handle the randomization, then use Javascript again to take the number from the Shuffle actor and select the corresponding word from the text. If you literally just have a text file and want to be able to hit a trigger and get a random word from it, that's dead easy (if you have the right Javascript User Actors).
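In case it helps to see those two Javascript steps in script form, here's a minimal sketch, assuming your Javascript actors use the usual function main() convention with inputs arriving in arguments[], that the whole text comes in on input 1, and that the Shuffle actor hands you a 1-based index:

```javascript
// Javascript actor #1 (sketch): count the words in the incoming text.
// Input 1 = the whole text; output 1 = the word count (feed this into the Shuffle actor's maximum).
function main() {
	var words = String(arguments[0]).split(/\s+/);
	var count = 0;
	for (var i = 0; i < words.length; i++) {
		if (words[i].length > 0) count++;   // skip empty entries caused by leading/trailing whitespace
	}
	return count;
}

// Javascript actor #2 (sketch): output the word at the index chosen by the Shuffle actor.
// Input 1 = the whole text; input 2 = a 1-based index; output 1 = the selected word.
function main() {
	var raw = String(arguments[0]).split(/\s+/);
	var words = [];
	for (var i = 0; i < raw.length; i++) {
		if (raw[i].length > 0) words.push(raw[i]);
	}
	var index = Math.floor(Number(arguments[1]));
	if (index < 1) index = 1;                        // clamp so a stray value can't
	if (index > words.length) index = words.length;  // point outside the list
	return words[index - 1];
}
```

If your Shuffle actor is set up to output 0-based values instead, drop the "- 1" in the last line and adjust the clamping accordingly.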
File download for all of the Scenes in the screenshots below: select-random-word-2025-01-27-4.0.7.izz
Read Text From File:
File Reader (I prefer Read Text From File because of the greater degree of control it offers, but File Reader is much simpler):
Data Array can work, but it's just a bit trickier since you need to hardcode some logic to deal with the dynamic nature of randomized text (blank text, allowing the same slot to be used twice in a row, etc.)
@montana said:
it seems I should be able to do what I'm trying to do there, but I've been running into the basic problem of making an Excel file that separates the words into individual cells (I did that with Text to Columns using a space delimiter), but then the file needs to be exported as Tab Delimited, and looking more closely it seems the final file needs to be in .txt format, which I'm stymied by as well.
If you go into Excel and do File > Save As > File Format > Specialty Formats, one of the options is a text file that's tab-delimited.
So perhaps there is already a post out there on how to do this, or another actor set-up that would work better, if I could be directed there?
There might be other posts if you search "random text", "random string", or "random", but the frontrunner actors are File Reader (you'll need some Javascript), Read Text From File (you'll also need some Javascript), Data Array (a bit trickier, but do-able), and of course there's also the Pythoner actor (though for that you either need to be able to write Python code or be very patient with a machine learning tool like ChatGPT).
I have a text of a couple thousand words that I want to pull in so I can randomly access individual words. I'm trying to figure out how the Data Array actor works and whether that is the best method. I watched the Guru Session, and it seems I should be able to do what I'm trying to do there, but I've been running into the basic problem of making an Excel file that separates the words into individual cells (I did that with Text to Columns using a space delimiter), but then the file needs to be exported as Tab Delimited, and looking more closely it seems the final file needs to be in .txt format, which I'm stymied by as well. So perhaps there is already a post out there on how to do this, or another actor set-up that would work better, if I could be directed there? Thanks much.
@ignitesomerset Sorry, I didn't get a chance to dig into this over the weekend. Here's what I think should be a solid starting point: play-random-question-video-record-answer-2025-01-27-4.0.7.izz
I did preliminary testing but didn't test it extensively. Feel free to let me know if there's anything that seems wildly broken.
The section at the top left of the Scene Editor is a legend that explains the color-coding I used.
Orange comments are used for things you really need to read and understand about the patch in order to use it successfully.
@dbini said:
if the one-button system gets out of sync, you could end up with long recordings of silence and not recording when someone speaks.
I addressed this in my example file.
If you start recording and don't stop it within 5 minutes (300 seconds), the end-recording trigger is hit automatically.
Also in my file, if you play a question video, it ends, and then you don't start recording an answer within 90 seconds, the system resets itself back to being ready to play a new question video the next time someone presses the button.
The length of both these timeout functions can be adjusted at the bottom of the Control Panel.
Other things I addressed:
- Used the Text Draw actor to ensure participants know what to do.
- Proposed a naming scheme for the question video files so that when a video file is selected, its name is used to show its question with a Text Draw actor.
- Implemented an automated naming mechanism for the answer files that standardizes their names and ensures each answer file name includes the name of the question video it was answering (see the sketch after this list). (Otherwise, since the question videos play in a random order, you'd have no quick way to tell which question any given answer video was responding to; you'd have to watch, rename, and sort them one by one.)
- Automatically show Stages and start Live Capture upon entering the Scene.
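For anyone curious about how that kind of naming can be built, here's a minimal sketch of the idea in a Javascript actor (hypothetical, not necessarily how the example file actually does it), assuming the question video's file name arrives on input 1:

```javascript
// Sketch only: build an answer file name that embeds the question it belongs to.
// Input 1 = the question video's file name, e.g. "Q03-favorite-color.mov"; output 1 = the answer name.
function main() {
	var question = String(arguments[0]).replace(/\.[^.]+$/, "");   // strip the file extension
	var now = new Date();
	function pad(n) { return (n < 10 ? "0" : "") + n; }
	var stamp = now.getFullYear() + "-" + pad(now.getMonth() + 1) + "-" + pad(now.getDate())
	          + "_" + pad(now.getHours()) + "-" + pad(now.getMinutes()) + "-" + pad(now.getSeconds());
	return "answer_" + question + "_" + stamp;   // e.g. "answer_Q03-favorite-color_2025-01-27_14-05-09"
}
```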
If you want to go more bare-bones, I tried to make the patch well-annotated enough that it'd be clear what everything does so that it's easier to trim down or build upon as you see fit.
These are logical results. One important point is that if you use two graphics cards (i.e. a control screen plugged into one and stages plugged into the other), the data (including every frame of image or video) must reside on both cards. In Isadora, the control screen has access to the video streams that are sent to the stage outputs (hover over a connection and it provides a real-time preview). Whenever this happens, the data must be copied from one GPU, passed through the CPU, and then copied to the other GPU.
Downloading from a GPU is always very slow and uses a lot of resources, whereas uploading is much quicker because it is a more common, accelerated operation. Typically, data is calculated on the CPU and then sent to a single GPU for rendering, which covers most GPU use cases. In your setup, though, every piece of video data needs to be on both cards. This means uploading to one card, performing the render, downloading to the CPU, and uploading to the other card. That's a lot of extra work.
Integrated GPUs do not need to be fed via a PCIe slot, so the connection can be a bit faster (which explains some of your results). NVLink used to be a way to avoid the overhead of this kind of copying by linking GPUs directly, but it is now only available on professional cards and requires software written specifically to make use of it. TouchDesigner has some very specific tools that let two Quadro cards take advantage of NVLink, but only under strict limitations and in certain scenarios.
Overall, the integrated GPU in your system is much less powerful. To make the best use of your computer, I suggest using a single GPU unless you know for certain that you can avoid transferring data between them (you cannot with Isadora, or really with any other software), or you know that the cost of moving all that data around will be worth it for some reason. Isadora cannot stop data moving between cards (perhaps if the previews and thumbnails were disabled, but that would be very counterintuitive), and almost no media software can. Disable the integrated card in the BIOS and only use the GeForce card. Also try running the tests again properly, disabling each card in turn, to see what each one actually does and to get more meaningful results free of the overhead from copying data. If you need more outputs from your GeForce, you might consider a Datapath device or a video wall splitter.
In short, this is not a limitation of Isadora—this is simply how computers work. If you look at high-end media servers like Disguise, they typically use a single card with specialised internal splitters to provide multiple outputs.
As an aside, outputting video via ArtNet to LED pixel displays requires downloading the textures from the GPU and then creating network packets on the CPU, which is slow on any system. However, the unified memory in Apple Silicon chips means this is very fast because no download is needed; the GPU and CPU share the same memory space. This also brings massive benefits for uploading data from the CPU to the GPU. For example, in a normal rendering pipeline or a non-GPU-accelerated video codec decode, once the CPU finishes decoding a frame, it is instantly available to the GPU without any extra steps. This is a big advantage over PCIe-connected GPUs when large amounts of data need to move around.
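To make the CPU side of that ArtNet step concrete, here's a minimal Node.js-style sketch (my illustration, not Isadora's internals; the universe number and address are made up) of what has to be assembled for every universe of every frame once the pixels have been read back from the GPU:

```javascript
// Hedged sketch: packing one universe of DMX data (up to 170 RGB pixels = 510 bytes)
// into an ArtDMX packet. This per-byte packing happens on the CPU for every frame.
const dgram = require("dgram");
const sock = dgram.createSocket("udp4");

function buildArtDmxPacket(universe, sequence, dmxData /* Uint8Array, <= 512 bytes, even length */) {
	const packet = Buffer.alloc(18 + dmxData.length);
	packet.write("Art-Net\0", 0, "latin1");     // 8-byte packet ID
	packet.writeUInt16LE(0x5000, 8);            // OpCode: OpDmx (low byte first)
	packet.writeUInt16BE(14, 10);               // protocol version 14
	packet[12] = sequence & 0xff;               // sequence number
	packet[13] = 0;                             // physical input port (informational)
	packet[14] = universe & 0xff;               // SubUni (low byte of the port address)
	packet[15] = (universe >> 8) & 0x7f;        // Net (high byte of the port address)
	packet.writeUInt16BE(dmxData.length, 16);   // DMX data length
	Buffer.from(dmxData).copy(packet, 18);      // the actual channel values
	return packet;
}

// e.g. send one universe of a frame after it has been downloaded from the GPU:
// sock.send(buildArtDmxPacket(0, seq, frameBytes.subarray(0, 510)), 6454, "192.168.1.50");
```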
TL;DR: don't use two different graphics cards at once. It's not Isadora's fault that this is slow.
It certainly took longer than expected to get back to this, but here are some load results from a different computer.
Specs:
Dell Optiplex Tower Plus 7020
Windows 11
Core i7-14700
32 GB RAM
512 GB SSD
Intel UHD Graphics 770
NVIDIA GeForce RTX 4060 8 GB
I ran each test under three conditions:
Idle, not showing stages (same result with an empty sketch or an existing show)
Empty sketch, showing stages
Running the show I was testing in the first post
In these tests the primary control output was plugged into the RTX, and the stage output was plugged into the card being tested. Also, in these tests, using the Windows graphics settings to tell Isadora to run on Power Saving (Intel) or High Performance (RTX) had no effect on the results.
Intel idle: 0.2%
Intel show stages: 55%
Intel running show: 34-64%
RTX idle: 0.2%
RTX show stages: 33%
RTX running show: 18-37%
Here's a different set of results. This time the primary output showing and controlling the Isadora sketch is plugged into the Intel output, and the stage output is plugged into the card being tested. Now the Windows graphics settings do make a difference.
Intel Power Saving idle: 0.2%
Intel Power Saving show stages: 1.2%
Intel Power Saving running show: 1.3-3.5%
Intel High Performance idle: 0.2%
Intel High Performance show stages: 77%
Intel High Performance running show: 65-94%
RTX Power Saving idle: 0.3%
RTX Power Saving show stages: 14%
RTX Power Saving running show: 10-17%
RTX High Performance idle: 0.2%
RTX High Performance show stages: 33%
RTX High Performance running show: 22-54%
It's tough to say what all that means behind the scenes, other than that the graphics pipeline is complicated. The rules seem to be:
1) Don't mix graphics cards
2) Prefer Intel to Nvidia
3) If you have to mix graphics cards, put the control output on Intel and as many stage outputs on Intel as possible, with any further stages on Nvidia
Thanks, Hugh, for highlighting that: really fascinating.
Future events and recordings of past ones are here:
https://dac.siggraph.org/spark...
hardware:
I've made interactive booths using a MIDI keyboard built into a wooden box, with big, satisfying wooden buttons that simply press down on one of the keys of the MIDI keyboard. Use a Control Watcher actor to listen for data from the MIDI controller (once you've added your MIDI controller in the MIDI setup menu).
I'm sure these days there are out-of-the-box MIDI buttons. You could also use a Makey Makey with a Keyboard Watcher.
Maybe it would be useful to have a system with separate Rec and Stop buttons. If the one-button system gets out of sync, you could end up with long recordings of silence and not recording when someone speaks.
Do they press once to start recording and again to stop recording? Yes, that’s exactly how I’d like it to work.
Hardware recommendations: I’d love some suggestions for hardware—anything you think would be reliable and suitable for this setup.
Deadlines: The deadline is pretty open, but the sooner the better!
Prototype offer:
If you have time this weekend to throw together a prototype, that would be amazing and super helpful!
Thank you so much!
How does the recording of the answer stop?
- Is a countdown timer shown so they know how long they have to talk?
- Does a sound play or is the Speak Text actor used to indicate that the recording is done?
- Do they hold down the button to record and release it when done?
- Do they press once to start recording and again to stop recording?
Another question: Do you already have the button or would you like hardware recommendations?
Final question: What is your deadline?
If I have some spare time this weekend I could spend 45 minutes throwing together a prototype for you to start from (though I don’t want to rob you of the experience if you wanted to do it yourself).