@nic said:
A camera pointing down from the rig seemed a better option.
Hi Nic,
This is a great topic and one that comes up a fair bit. It is also interesting for me because it prompts me to reflect on all of the tracking projects I have attempted and where they might lead next. The top-down approach has its nuances too: there are many variables to consider when using a camera that depends on reflected light and on the lens ratio. So let's assume you have a wide-angle lens on a camera mounted in the grid (or an array of cameras, as Dusx suggests), plus a consistent lighting state, so you can capture a suitable image with clear tracking 'blobs'. Your question about the deficiencies of Eyes++ is then contingent on the whole camera, lens, and rigging configuration you have, or intend to set up.
You can simulate an optimum blob-tracking image and then test the Eyes++ module against the desired tracking outcome for your project. Here is an example of a simulation I put together last year to test and calibrate Eyes++: [blob tracking screen capture]
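If it helps, here is a rough Python/OpenCV sketch of the same idea (not my original simulation, just an illustration): white discs drifting and crossing over a black frame, which is the kind of clean, high-contrast feed you want a blob tracker to see.

```python
# Illustrative sketch of a synthetic top-down "blob" test feed:
# white discs that drift, bounce, and cross paths on a black frame.
import numpy as np
import cv2

W, H = 640, 480
rng = np.random.default_rng(1)
pos = rng.uniform([50.0, 50.0], [W - 50.0, H - 50.0], size=(4, 2))  # 4 "performers"
vel = rng.uniform(-3.0, 3.0, size=(4, 2))

for _ in range(600):
    pos += vel
    # bounce off the frame edges so the blobs keep crossing paths
    bounce = (pos < 20) | (pos > np.array([W - 20, H - 20]))
    vel[bounce] *= -1
    pos = np.clip(pos, 20, np.array([W - 20, H - 20]))
    frame = np.zeros((H, W), np.uint8)
    for x, y in pos:
        cv2.circle(frame, (int(x), int(y)), 18, 255, -1)  # one clean blob
    cv2.imshow("synthetic blobs", frame)
    if cv2.waitKey(30) == 27:  # Esc quits
        break
cv2.destroyAllWindows()
```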
The critical thing for me in doing the simulation was the calibration of Eyes++ in terms of blob occlusion and the persistence of tracking individual 'blobs' as they came together or crossed paths.
I came to some conclusions from running the simulation. Firstly, while I could control the simulated image, could I be confident of capturing an equally clean image of the people/performers moving around within the range of the camera? That question comes down to consistency of lighting: for example, a static lighting state that allows Eyes++ to be finely calibrated so the blob tracking works at its optimum in terms of occlusion and persistence.

In the past I played with infrared lighting 'hacks': an infrared filter over the camera lens plus dedicated infrared lights. I then realised that a dedicated thermal infrared camera (like a FLIR) could simplify this by cutting out the need for infrared lighting instruments, and this is the system I currently use for blob tracking with Eyes++. It does mean my tracking system is limited to a fixed lens, which gives approx. a 6 x 4 m tracking area from a 6 m grid rig mount.
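As a back-of-envelope check on that geometry: with the camera pointing straight down, the floor coverage on each axis is 2 × height × tan(FOV/2). The field-of-view figures in the sketch below are my own assumptions, chosen to roughly reproduce the ~6 x 4 m area, not any particular camera's published specs.

```python
# Rough coverage arithmetic; the FOV values are assumptions, not camera specs.
import math

def coverage(height_m, fov_deg):
    # width of floor seen on one axis, camera pointing straight down
    return 2 * height_m * math.tan(math.radians(fov_deg) / 2)

h = 6.0  # grid rig height in metres
for axis, fov in (("horizontal", 53.1), ("vertical", 36.9)):
    print(f"{axis}: {coverage(h, fov):.1f} m")  # ~6.0 m and ~4.0 m
```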
I found that Eyes++ requires fine and nuanced calibration, and that it worked better to have all of the blobs assigned, so that blob persistence and blob instances could begin to find stability. There may be some further investigation to do here.
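To illustrate why persistence is the hard part (this is a toy model, not how Eyes++ works internally): a tracker has to keep stable IDs on blob centroids from frame to frame, and decide what to do when a blob vanishes into an occlusion or a merge. A greedy nearest-neighbour version might look like this:

```python
# Toy illustration of blob persistence (not the Eyes++ implementation):
# stable IDs via greedy nearest-neighbour matching, with a grace period
# so a briefly occluded blob keeps its ID instead of being re-spawned.
import math

class Tracker:
    def __init__(self, max_dist=50.0, max_missed=10):
        self.tracks = {}          # id -> (x, y, frames_missed)
        self.next_id = 0
        self.max_dist = max_dist  # px: anything further is a new blob
        self.max_missed = max_missed

    def update(self, detections):
        unmatched = list(detections)
        for tid, (tx, ty, missed) in list(self.tracks.items()):
            if unmatched:
                # claim the nearest detection if it is close enough
                d, (x, y) = min(
                    (math.hypot(x - tx, y - ty), (x, y)) for x, y in unmatched
                )
                if d <= self.max_dist:
                    self.tracks[tid] = (x, y, 0)
                    unmatched.remove((x, y))
                    continue
            # occluded or merged this frame: coast for a while, then expire
            if missed + 1 > self.max_missed:
                del self.tracks[tid]
            else:
                self.tracks[tid] = (tx, ty, missed + 1)
        for x, y in unmatched:  # genuinely new blobs get fresh IDs
            self.tracks[self.next_id] = (x, y, 0)
            self.next_id += 1
        return {tid: (x, y) for tid, (x, y, _) in self.tracks.items()}

t = Tracker()
print(t.update([(100.0, 100.0), (300.0, 200.0)]))  # {0: ..., 1: ...}
```

The max_dist and max_missed thresholds play much the same role as the occlusion and persistence settings you end up tuning by hand.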
In terms of using an RGB camera for tracking, another option is to consider tracking by color, although this brings its own nuances and compromises.
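For example (a rough sketch, and the hue range here is an assumption you would need to tune to the actual costume color), keying one color in HSV and taking the largest contour's centroid as the 'blob':

```python
# Sketch of color-keyed tracking: isolate one costume color in HSV and
# take the centroid of the largest matching contour as the blob position.
import cv2

cap = cv2.VideoCapture(0)  # any RGB camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (100, 120, 70), (130, 255, 255))  # blue-ish; tune this
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        m = cv2.moments(c)
        if m["m00"] > 0:
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            cv2.circle(frame, (cx, cy), 8, (0, 0, 255), 2)  # mark the blob
    cv2.imshow("color tracking", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```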
Best Wishes
Russell