Thank you for your replies!
Here’s an example of what I tried in order to solve my problem (making a realtime depth map in Isadora 3D space):
Image 1.jpeg: a 3D object with lighting on
Image 2.jpeg: the same 3D object with a different tilt
Image 3.jpeg: both 3D objects combined
Image 4.jpeg: both 3D objects combined, lighting off
Image 5.jpeg: using the 3D Particles actor I constructed a series of vertical rectangles traversing in the z-direction, each rectangle black with 4% intensity. This creates a kind of solid cube of “black fog” in front of the 3D objects, and with depth test enabled it produces a “depth map” of the objects in 3D space (a rough sketch of the math is below).
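To make the idea concrete, here is a rough Python sketch of the compositing math behind the stacked rectangles. The plane count (100), the near/far range (0 to 10) and the 4% alpha are just stand-in numbers, nothing Isadora-specific:

```python
# Rough model of the stacked-rectangle "black fog" trick described above.
# Assumed setup (not taken from Isadora): num_planes evenly spaced black
# planes across the depth range, each drawn with 4% opacity, and the depth
# test discarding the planes that lie behind the object's surface.

def apparent_brightness(object_depth, num_planes=100, plane_alpha=0.04,
                        near=0.0, far=10.0, object_brightness=1.0):
    """Brightness an object surface ends up with after the black planes
    in front of it are composited over it (standard 'over' blending)."""
    # How many planes sit between the camera and the object surface.
    spacing = (far - near) / num_planes
    planes_in_front = int((object_depth - near) / spacing)
    planes_in_front = max(0, min(num_planes, planes_in_front))
    # Each black plane keeps (1 - alpha) of whatever is behind it,
    # so brightness falls off exponentially with depth.
    return object_brightness * (1.0 - plane_alpha) ** planes_in_front

if __name__ == "__main__":
    for depth in (0.5, 2.5, 5.0, 7.5, 9.5):
        print(f"depth {depth:4.1f} -> brightness {apparent_brightness(depth):.3f}")
```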
However, this consumes a lot of GPU power and is not “stable”: if changes are made, for instance to the tilt of the 3D objects, the transparent black rectangles are not distributed evenly, so the “depth map” looks uneven and banded. See the following image 6.jpeg.
If you look carefully, you can also see a lot of “banding” in the darker areas.
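A small continuation of the sketch above shows where the banding comes from: with a finite number of planes, the result can only take plane-count + 1 distinct grey levels, so nearby depths snap to the same level, and any unevenness in the plane spacing shows up directly as uneven bands (again, 100 planes and 4% alpha are just assumed numbers):

```python
# Continuation of the sketch above: the fog can only produce a finite set
# of brightness levels, which is what reads as banding in the depth map.
num_planes, plane_alpha = 100, 0.04
levels = [(1.0 - plane_alpha) ** k for k in range(num_planes + 1)]
print(f"{len(levels)} possible grey levels with this hypothetical setup")
print(f"brightest band: {levels[0]:.3f}, darkest band: {levels[-1]:.3f}")
```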
So I was wondering if there is a better (and less GPU-hungry) way to do this? Any ideas?
Thanks!