[ANSWERED] Thinking about NDI and IP Video (CPU vs GPU video)
-
Hi, in the future when NDI and IP video become the norm, will the graphics card still do the actor processing, or will it be done on the CPU?
-
I imagine we will want to continue doing the processing on the GPU; it is simply much faster at parallel processing than a CPU will ever be. Systems are likely to change in other ways to allow for faster transfer of video frames between the GPU and CPU. This is already happening with the M1 processor in new Macs.
-
@craigw said:
Hi, in the future when NDI and IP video become the norm, will the graphics card still do the actor processing, or will it be done on the CPU?
In the end, the data from NDI or a movie or even a webcam always starts its life as a bitmap on the CPU. (Well, maybe not always, but I don't know of any examples to the contrary.) Then you need to get that bitmap uploaded from the CPU to the GPU so that you can do fast processing on it. This step is all quite quick, because GPUs are designed to make CPU-to-GPU transfers efficient, using things like direct memory access and other special techniques.
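If it helps to picture that upload step, here is a minimal sketch in C++ using OpenGL. It is only an illustration, not how any particular application necessarily implements it; it assumes a valid GL context already exists, and frame_pixels, width, and height stand in for a decoded NDI, movie, or webcam frame.

```cpp
// Sketch: uploading a CPU-side bitmap into a GPU texture with OpenGL.
// Assumes a valid GL context; frame_pixels/width/height are placeholders.
#include <GL/gl.h>
#include <cstdint>
#include <vector>

GLuint uploadFrameToGPU(const std::vector<std::uint8_t>& frame_pixels,
                        int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    // The driver copies the bitmap into GPU memory here; on most hardware
    // this path is heavily optimized (DMA, pinned staging buffers, etc.).
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, frame_pixels.data());

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```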
The harder part is getting the GPU image back to the CPU. This is inherently inefficient, because GPUs (which are arguably designed for gaming, not for video processing) do not optimize these transfers; in a game, you never need to do it. Sending the GPU image out to NDI requires a transfer from the GPU to the CPU, not to mention costly compression to prepare the image to be sent over the network. Because of this, such output will always be a kind of "big ticket item" when it comes to performance, especially if you start doing it with several high-resolution images.
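And here is the corresponding read-back sketch for the return trip, again in C++/OpenGL and again only illustrative. It assumes the frame has already been rendered to the currently bound framebuffer; the NDI send itself is only hinted at in a comment.

```cpp
// Sketch: reading a rendered frame back from the GPU so it can be handed
// to an encoder/sender such as NDI. Assumes a valid GL context.
#include <GL/gl.h>
#include <cstdint>
#include <vector>

std::vector<std::uint8_t> readFrameFromGPU(int width, int height)
{
    std::vector<std::uint8_t> pixels(static_cast<size_t>(width) * height * 4);

    // glReadPixels stalls until the GPU has finished the frame, then copies
    // it back across the bus -- this synchronization and copy is the
    // "big ticket item" described above.
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    // From here the bitmap would still need to be converted/compressed and
    // handed to the NDI sender, which adds further CPU cost.
    return pixels;
}
```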
As @DusX points out, the memory on an M1 is "unified", meaning that the distinction between CPU memory and GPU memory would no longer seem to exist. I am awaiting my M1 laptop, and I am extremely curious to find out whether this holds true in practice.
Best Wishes,
Mark