Thank you all for your thoughts. But with regard to Matt H's question:
"what happens if I distribute an actor built in mode C and a less experienced user accidentally switches it to a different mode while editing it or checking out its internals. I foresee an increase in emails about my actors not behaving as expected."
This is not a problem unless the user chooses "Save and Update All," and if they're doing that with someone else's User Actor (especially one as generalized as yours) then they will simply need to accept what happens.
On the other hand, one could "bulletproof" the User Actor inputs and outputs by setting specific Absolute Min/Max (i.e., to something like -100 and +100) so that no other customizations on the outputs would be needed.
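The effect of such an Absolute Min/Max limit is just a clamp on the value an output can emit. As a rough illustration (the function name and the -100/+100 range are taken from the example above, not from Isadora itself):

```python
def clamp_output(value, lo=-100.0, hi=100.0):
    """Limit an output value to the range [lo, hi],
    mimicking an Absolute Min/Max of -100..+100."""
    return max(lo, min(hi, value))

print(clamp_output(250.0))   # over the max -> 100.0
print(clamp_output(-250.0))  # under the min -> -100.0
print(clamp_output(42.0))    # in range, passes through -> 42.0
```

Whatever a user does inside the actor, values leaving it can never stray outside that range.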
FWIW: I ran your test program on my Retina's integrated HD 4000 (512 MB VRAM) -- with the provided 6000x4500 texture it runs fine. I then created a 12000x4500 image and the program crashes with the integrated card, while it still works with the NVIDIA GeForce GT 650M (1024 MB VRAM).
If my calculation is correct (6000x4500x8 = 216 MB and 12000x4500x8 = 432 MB, assuming 8 bytes per pixel), this behavior is consistent with the HD 4000's 512 MB rating: 432 MB for the texture alone leaves very little headroom once the driver's other allocations are counted. If it's 4 bytes per pixel, the 12000x4500 texture is only 216 MB and the crash would be a surprise, though it could perhaps still be explained by the memory overhead a texture requires on the GPU.
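For anyone who wants to check these numbers against their own card, here is the raw arithmetic. The bytes-per-pixel figures are assumptions (4 for 8-bit RGBA, 8 for 16-bit/half-float RGBA); actual GPU storage can be larger due to padding and mipmaps:

```python
def texture_mb(width, height, bytes_per_pixel):
    """Raw (uncompressed) texture size in MB, counting 1 MB = 10**6 bytes.
    Ignores driver-side overhead such as alignment padding or mipmap chains."""
    return width * height * bytes_per_pixel / 1e6

# The two images from the test, at both plausible pixel formats.
for w, h in [(6000, 4500), (12000, 4500)]:
    for bpp in (4, 8):
        print(f"{w}x{h} @ {bpp} B/px -> {texture_mb(w, h, bpp):.0f} MB")
# 6000x4500  @ 8 B/px -> 216 MB (fits in 512 MB)
# 12000x4500 @ 8 B/px -> 432 MB (tight against 512 MB once overhead is added)
```

Note the raw sizes are lower bounds: the framebuffer, intermediate render targets, and the driver's own working set all come out of the same 512 MB on an integrated card.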