High load on RTX Ada, low on Intel Arc
-
Hi all,
It's time for another of those threads of "what's wrong with this computer?"
With this laptop (and its twin, bought at the same time), when using the Nvidia RTX Ada (the new name for Quadro) graphics and showing stages, load in Isadora starts higher than expected, and if a show runs long enough the load keeps climbing until Isadora stutters into a full freeze, requiring a computer restart to clear the condition. The interesting part is that I've discovered load is way lower if I force Isadora onto the Intel Arc graphics. No longer-term tests on that yet, but levels are low enough that I'm not worried about a crash.
Specs for the laptop are: Dell Mobile Precision 3591, Win 11, Ultra 9 185H, 2TB NVMe SSD, 64GB DDR5, NVIDIA RTX 2000 Ada Generation 8 GB
Just starting from an empty file and showing stages, load stabilizes at about 17% on the RTX, compared to 1% on Intel. In the show I've been given, at the point where things really take off, load swings back and forth between ~30% and 70% on the RTX, but doesn't go above 4% on Arc. Windows Task Manager and Resource Monitor show neither graphics card going above 10%.
I've tried switching out the graphics driver, going with the one that comes down from Windows Update, the latest from Dell, and the latest stable as well as the beta from Nvidia.
Other than finding a different computer to test on, any ideas on what I should try next?
-
I presume you've used the Nvidia Control Panel (or the Quadro equivalent) to tell the system to use the Ada card for Isadora?
Are you able to disable the Intel Arc card at all? In the BIOS, perhaps? Wondering if that might make a difference...
-
Newer Windows ignores the Nvidia Control Panel setting and uses this instead:
https://www.windowslatest.com/...
No problem using that, it's how I've switched between Arc and RTX to see the differences, and Task Manager shows Isadora going to GPU 0 or 1 depending on how it's set.

No option in the BIOS to disable either graphics card. I remember seeing that in past models, but double-checking against the service manual, that's not a thing in this one.
-
Try ensuring that no displays are connected to the outputs that feed directly to the Intel GPU. Only use connections that go directly to the RTX. This should help eliminate transfers between the two cards.
-
Not an option with a laptop, unfortunately. The ports are those that come with the laptop, and they're shared.
-
My laptop, for example, has two sets of ports. The set on the left connects directly to the Nvidia card, while the set on the back of my machine connects through the Intel card.
If you look at the PhysX section of your Nvidia Control Panel, you can see which physical ports connect and how. Some laptops allow re-assigning these via GPU settings; for instance, my machine allows three modes: active power saving that switches between GPUs, low power usage that doesn't use the dedicated GPU at all, and dedicated-only where the Intel GPU is never used.

So the PhysX section of the control panel will indicate whether any connected display is running through the Intel GPU or not, and will indicate which port facilitates the connection (these only update for me after I make a change and reboot my system). Unfortunately, not all PCs offer the same variety of setups.
-
Many adjustments and reboots later, and still no luck. I've found the diagram of connections, and whether I connect to the HDMI or USB-C ports, they all show as connected to the Intel Arc, with only PhysX pointing to the Nvidia RTX2000. I've played with all the performance/quality options I could find in the Nvidia panel, the Intel Arc panel, and Windows power/graphics settings, and all the connections still go to the Arc card.
I tried disabling the Arc card in Device Manager. Windows still knew that the Nvidia card existed, but refused to send anything to it, disabled outputs except for the laptop screen, and thought that everything was showing on a Basic Display Adapter.
Any further ideas? This has me stumped.
-
Have you talked to Dell support? In the past I have found them to be knowledgeable when I had issues (not related to yours) with a Precision mobile workstation.
-
Not yet, but a good idea after I do some more testing. My latest switch to the Game Ready series of drivers seems to have stabilized load at under 30%, but that's still well above the Arc's 4%. If I have time before vacation I'll compare to a desktop with a GeForce/UHD combo, otherwise more results to come in a few weeks.
-
@kfriedberg said:
30%
It may be that there is some delivery delay going through the Arc to the Nvidia GPU. However, a heavy patch that maxes out Intel Arc-only processing may be easily handled by the Arc-to-Nvidia path. I would suggest running some heavy test files where the Intel processing maxes out, then seeing how high that goes on the Nvidia.
-
It certainly took longer than expected to get back to it, but here's some load results from a different computer.
Specs:
Dell Optiplex Tower Plus 7020
Windows 11
Core i7-14700
32 GB RAM
512 GB SSD
Intel UHD Graphics 770
NVIDIA GeForce RTX 4060 8 GB

Under three conditions on each test:
Idle, not showing stages (same result on empty sketch or using an existing show)
Empty sketch, showing stages
Running the show that I was testing in the first post

In these tests the primary control output was plugged into the RTX, and the stage output was plugged into the given card. Also, in these tests the Windows graphics setting telling Isadora to use Power Saving (Intel) or High Performance (RTX) had no effect on the results.
Intel idle: 0.2%
Intel show stages: 55%
Intel running show: 34-64%

RTX idle: 0.2%
RTX show stages: 33%
RTX running show: 18-37%

Here's a different set of results. This time the primary output showing and controlling the Isadora sketch is plugged into the Intel output, and the stage output is plugged into the given card. But now the Windows graphics settings do have an effect.
Intel Power Saving idle: 0.2%
Intel Power Saving show stages: 1.2%
Intel Power Saving running show: 1.3-3.5%

Intel High Performance idle: 0.2%
Intel High Performance show stages: 77%
Intel High Performance running show: 65-94%

RTX Power Saving idle: 0.3%
RTX Power Saving show stages: 14%
RTX Power Saving running show: 10-17%

RTX High Performance idle: 0.2%
RTX High Performance show stages: 33%
RTX High Performance running show: 22-54%

It's tough to say what all that means behind the scenes, other than that the graphics pipeline is complicated. The rules seem to be:
1) Don't mix graphics cards
2) Prefer Intel to Nvidia
3) If you have to mix graphics cards, keep the control output and as many stage outputs as possible on Intel, then put further stages on Nvidia
-
These are logical results. One important point is that if you use two graphics cards (ie a control screen plugged into one and stages plugged into another), the data (including every frame of image or video) must reside on both cards. In Isadora, the control screen has access to the video streams (if you hover over the connection, it provides a real-time preview) that are sent to the stage outputs. Whenever this happens, the data must be copied from one GPU, passed through the CPU, and then copied to the other GPU.
Downloading from a GPU is always very slow and uses a lot of resources, whereas uploading is much quicker because it is a more common accelerated function. Typically, data is calculated on the CPU and then sent to a single GPU for rendering, which covers most GPU use cases. In your setup, though, every piece of video data needs to be on both cards. This means uploading to one card, performing the render, downloading to the CPU, and uploading to the other card. That is a lot of extra work.
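To put rough numbers on that round trip, here is a quick back-of-the-envelope sketch. The 1080p60 RGBA figures are assumptions for illustration; your actual stage resolutions, frame rates, and pixel formats will change the numbers.

```python
# Rough bandwidth estimate for the GPU -> CPU -> GPU round trip.
# Assumes uncompressed 8-bit RGBA frames at 1080p60; adjust for your show.
width, height = 1920, 1080
bytes_per_pixel = 4                                  # RGBA8
fps = 60

frame_bytes = width * height * bytes_per_pixel       # ~8.3 MB per frame
one_way = frame_bytes * fps                          # ~500 MB/s per stream, one direction
round_trip = one_way * 2                             # download to CPU + upload to the other GPU

print(f"per frame:  {frame_bytes / 1e6:.1f} MB")
print(f"one way:    {one_way / 1e6:.0f} MB/s per stream")
print(f"round trip: {round_trip / 1e6:.0f} MB/s per stream")
```

And the raw bandwidth is only part of it: the download side usually stalls the render pipeline while it waits for the frame, which is why the cost tends to show up as load swings and stutter rather than a steady number.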
Integrated GPUs do not need to be fed via a PCI slot, so the connection can be a bit faster (which explains some of your results). NVLink used to be a way to avoid the overhead of this kind of copying by linking GPUs directly, but it is now only available on professional cards and requires software to be programmed to make use of it. There are some very specific tools in TouchDesigner, under strict limitations, that allow two Quadro cards to take advantage of NVLink. However, this is limited to two Quadro cards and applies only in certain scenarios.
Overall, the integrated GPU in your system is much less powerful. To make the best use of your computer, I suggest using a single GPU unless you know for certain that you can avoid transferring data between them (you cannot with Isadora, or really with any other software), or you know that the cost of moving all the data around will be worth it for some reason. Isadora cannot stop data moving between cards (maybe if the previews and thumbnails were disabled, but that would be very counter-intuitive), and almost no media software can. Disable the integrated card in the BIOS and only use the GeForce card. Also, try running the tests again properly, disabling each card in turn, to see what each one actually does and to get more logical results, free of the overhead from copying data. If you need more outputs from your GeForce, you might consider a Datapath device or a video wall splitter.
In short, this is not a limitation of Isadora—this is simply how computers work. If you look at high-end media servers like Disguise, they typically use a single card with specialised internal splitters to provide multiple outputs.
As an aside, outputting video via ArtNet to LED pixel displays requires downloading the textures from the GPU and then creating network packets on the CPU, which is slow on any system. However, the unified memory in Apple Silicon chips means this is very fast because no download is needed; the GPU and CPU share the same memory space. This also brings massive benefits for uploading data from the CPU to the GPU. For example, in a normal rendering pipeline or a non-GPU-accelerated video codec decode, once the CPU finishes decoding a frame, it is instantly available to the GPU without any extra steps. This is a big advantage over PCIe-connected GPUs when large amounts of data need to move around.
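To make that concrete, here is a minimal sketch of the CPU-side packetising step, assuming the standard ArtDmx layout. The pixel buffer, universe numbering, node IP, and the 170-RGB-pixels-per-universe split are all illustrative assumptions; a real patch would use the pixels read back from the GPU and the addressing of its own fixtures.

```python
import socket
import struct

def artdmx_packet(universe: int, dmx_data: bytes, sequence: int = 0) -> bytes:
    """Build one ArtDmx packet carrying up to 512 DMX channels."""
    assert 0 < len(dmx_data) <= 512
    return (
        b"Art-Net\x00"                       # protocol ID
        + struct.pack("<H", 0x5000)          # OpCode: ArtDmx (little-endian)
        + struct.pack(">H", 14)              # protocol version (hi byte, lo byte)
        + bytes([sequence & 0xFF, 0])        # sequence, physical input port
        + struct.pack("<H", universe)        # 15-bit port-address (little-endian)
        + struct.pack(">H", len(dmx_data))   # data length (big-endian)
        + dmx_data
    )

# Hypothetical frame of RGB pixels, as if already read back from the GPU.
pixels = bytes(512 * 3)                      # 512 pixels * 3 channels, all black
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# 170 RGB pixels (510 channels) fit in one DMX universe, so split the frame up.
for i in range(0, len(pixels), 510):
    chunk = pixels[i:i + 510]
    sock.sendto(artdmx_packet(universe=i // 510, dmx_data=chunk),
                ("192.168.1.50", 6454))      # hypothetical node IP; 6454 is the Art-Net port
```

Every one of those packets has to be assembled and sent for every universe on every frame, which is why the readback-plus-packetise path gets expensive on busy pixel maps.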
TL;DR: don't use two different graphics cards at once. It's not Isadora's fault that this is slow.