Thank you all for your thoughts. But with regard to Matt H's question:
"what happens if I distribute an actor built in mode C and a less experienced user accidentally switches it to a different mode while editing it or checking out its internals. I foresee an increase in emails about my actors not behaving as expected."
This is not a problem unless the user chooses "Save and Update All," and if they're doing that with someone else's User Actor (especially one as generalized as yours) then they will simply need to accept what happens.
On the other hand, one could "bulletproof" the User Actor inputs and outputs by setting a specific Absolute Min/Max (e.g., to something like -100 and +100) so that no other customizations on the outputs would be needed.
I would also love to see the frequency band outputs that exist on the movie player actor on the sound player and the AU sound player.
While we are on the subject, it would be great to have an audio output on the movie player, so I could run the audio on movies through core audio effects.
Please note that I have put a lot of work into the Stage Preview feature for 1.3.0f25. It will be faster and more robust. So please do give it a try again once I put 1.3.0f25 up for download.
FWIW: I ran your test program on my Retina's integrated HD 4000 (VRAM 512 MB) -- with the provided 6000x4500 texture it runs fine. I then created a 12000x4500 image, and the program crashes with the integrated card, while it still works with the NVIDIA GeForce GT 650M (VRAM 1024 MB).
If my calculation is correct (6000x4500x8 = 216 MB and 12000x4500x8 = 432 MB, assuming 8 bytes per pixel), this behavior is in line with the HD 4000's 512 MB of memory once you account for overhead. If it's 4 bytes per pixel, the 12000x4500 crash would be a surprise, but could perhaps still be explained by the memory overhead a texture requires on the GPU.
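For anyone who wants to redo the arithmetic for other image sizes, here is a quick sketch of the calculation above (the `texture_mb` helper is just illustrative, and it counts only the raw pixel data, not any driver or mipmap overhead; 1 MB is taken as 1,000,000 bytes):

```python
def texture_mb(width, height, bytes_per_pixel=8):
    """Raw texture footprint in MB, ignoring GPU/driver overhead."""
    return width * height * bytes_per_pixel / 1_000_000

# The two test images, at an assumed 8 bytes per pixel:
print(texture_mb(6000, 4500))    # 216.0 MB -- fits in the HD 4000's 512 MB
print(texture_mb(12000, 4500))   # 432.0 MB -- close to the 512 MB limit

# Same large image at 4 bytes per pixel (e.g. 8-bit RGBA):
print(texture_mb(12000, 4500, bytes_per_pixel=4))  # 216.0 MB
```

So at 8 bytes per pixel the large texture alone eats most of the integrated card's VRAM, which makes the crash unsurprising.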