suggestions for object tracking
-
Hello Izzimunity,
I'd like to tap the power of crowd knowledge, and I know some of you have already experimented with tracking things.
I know about the possibilities regarding IR tracking, but I thought there might be other new devices which could track movements, maybe accelerometer based. I'd like to track a large screen (12m x 3m) for a constant projection including spin. E.g. I would need to tag at least 4 corners.
Any suggestions or stories?
Thank you
Dill -
-
What is moving the screen? If it can also move further away from the projector you need to get distance as well.
With no expense limitations I would say use a mocap system - OptiTrack can do this easily, but it's an intense setup and expensive (you would make a few markers and stick them to the frame).
If the screen is moved by some kind of motorised system tracking the movements of the motors could be enough.
Accurate accelerometer tracking is difficult. If a central point of the screen is always fixed (so it rotates and angles around the centre of the screen), then this will work OK. In theory, in this scenario a single 6-axis accelerometer/gyro unit will give you all the information you need.
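To make the 6-axis idea concrete, here is a minimal sketch of the usual way gyro and accelerometer readings get fused into a stable angle: a complementary filter. All the signal values below (rates, bias, noise levels) are simulated for illustration, not taken from any real sensor.

```python
import numpy as np

# Complementary-filter sketch: fuse a (simulated) gyro rate, which drifts
# because of bias, with a (simulated) accelerometer tilt angle, which is
# noisy but drift-free. All numbers here are invented for illustration.
dt, n = 0.01, 2000                       # 100 Hz for 20 seconds
t = np.arange(n) * dt
true_angle = 0.5 * np.sin(0.2 * t)       # the screen slowly rocking (radians)
true_rate = np.gradient(true_angle, dt)

rng = np.random.default_rng(0)
gyro = true_rate + 0.05 + rng.normal(0, 0.01, n)   # rate + bias + noise
accel_angle = true_angle + rng.normal(0, 0.05, n)  # tilt from gravity, noisy

alpha = 0.98                             # trust gyro short-term, accel long-term
fused = np.zeros(n)
gyro_only = np.zeros(n)
for i in range(1, n):
    gyro_only[i] = gyro_only[i - 1] + gyro[i] * dt
    fused[i] = alpha * (fused[i - 1] + gyro[i] * dt) + (1 - alpha) * accel_angle[i]

print("gyro-only error:", abs(gyro_only[-1] - true_angle[-1]))  # drifts away
print("fused error:   ", abs(fused[-1] - true_angle[-1]))       # stays small
```

Note this only gives you orientation, which is exactly the fixed-pivot case above; it does nothing for position.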
This is a fair bit easier with marker tracking - I don't know what the exact setup is, but if it is possible I would look at using ArUco markers.
If the screen is mostly facing the audience and there is enough space, then these could be printed large and applied to the back of the screen. If you need invisible tracking you can also cut them out of some IR-reflective material. You can use a few markers on the back of the screen and a camera at the back of the stage. As long as it sees one whole marker at reasonable resolution (i.e. not tiny in a low-res camera), it can give you an accurate distance and rotation.
Once you have the positional and rotation data from a single marker, you can then use homography to calculate the rotation and scaling you need to project onto the surface. You will need to perform some kind of calibration to get this to work accurately, but this just means identifying a series of points on the physical screen and moving a series of points in the video to match them while they are being passed through the rotation and scaling decoded from the marker. There are a lot of moving parts, but I have used homography with this kind of calibration routine before, with surprisingly good results.
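In practice `cv2.findHomography` does this for you, but a small numpy sketch of the underlying Direct Linear Transform makes the calibration idea concrete: four matched points (physical screen corners against projector pixels, the coordinates below are invented) fully determine the 3x3 mapping.

```python
import numpy as np

def find_homography(src, dst):
    """Direct Linear Transform: the 3x3 H mapping src points to dst points.
    (cv2.findHomography does the same job, with outlier rejection on top.)"""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H is the null vector of this system: the smallest singular vector.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp(H, pts):
    """Apply H to 2D points (divide by the projective coordinate)."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    m = (H @ p.T).T
    return m[:, :2] / m[:, 2:3]

# Hypothetical calibration: four screen corners (metres) -> projector pixels.
screen = np.array([[0, 0], [12, 0], [12, 3], [0, 3]], dtype=float)
pixels = np.array([[105, 92], [1830, 120], [1795, 610], [88, 585]], dtype=float)
H = find_homography(screen, pixels)
print(np.round(warp(H, screen)))   # lands back on the four pixel targets
```

Once calibrated, the same `warp` maps any point on the physical screen into projector space, so the marker's decoded rotation and translation can be re-applied to the video each frame.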
All this is built into OpenCV, which you can now use through Pythoner in Isadora 4. It will take quite some work, but it can work.
Here is some information
And here is a video where I used solvePnP to calculate a projection plane between images on an LED screen being analysed for contours in real time with OpenCV, then fed to a laser that projected them back onto the LED screen. The homography provided the translation between the physical screen, the physical position of the laser, and the drawing space inside my code.
-
Thank you @fred, this is an extensive base to work from!
I'm not sure about your "Accurate accelerometer tracking is difficult, if a central point of the screen is always fixed [...]". Are you referring to a single centred accelerometer/gyroscope? I thought to use at least four of them, one for each corner, defining a base position for each and adapting the movement from there.
-
Accelerometers give you rotational angles and acceleration along their axes. Even with 4 it will be very hard to get good tracking over space - angles of rotation will be great, so if the object is attached to a single pivot point then this would work really well. Movement will be difficult.
If the screen moves freely, accelerometers will be much less accurate and harder to combine than methods that track from a fixed point (mocap or camera with marker). They measure acceleration and are often paired with gyroscopes. The combination can give you some useful data, but even with a lot of them it will drift pretty fast. Some of this is from the hardware and interference, and some of this is from the maths you need to do.
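The maths part of the drift is easy to show: position comes from integrating acceleration twice, so even small zero-mean sensor noise accumulates into a position error that grows without bound. A quick simulation of a perfectly stationary sensor (noise level invented for illustration):

```python
import numpy as np

# Double-integrating accelerometer noise: a stationary sensor still "moves".
# Noise level and sample rate below are invented for illustration.
rng = np.random.default_rng(0)
dt, n, trials = 0.01, 1000, 200            # 100 Hz, 10 s, 200 runs
noise = rng.normal(0, 0.05, (trials, n))   # m/s^2 noise, true motion is zero
vel = np.cumsum(noise, axis=1) * dt        # first integration -> velocity
pos = np.cumsum(vel, axis=1) * dt          # second integration -> position

rms_1s = np.sqrt(np.mean(pos[:, 99] ** 2))
rms_10s = np.sqrt(np.mean(pos[:, -1] ** 2))
print(f"position error after 1 s: {rms_1s:.4f} m, after 10 s: {rms_10s:.4f} m")
```

The position error keeps accelerating away (its standard deviation grows like t^1.5), which is why accelerometer-only position tracking needs an external fixed reference to correct it.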
Things like skeleton suits that use accelerometers gain a lot of accuracy from inverse kinematics (IK). They are attached to a skeleton with defined angular relationships. The skeleton actually makes the maths harder but more reliable: you have some limits as to how things can move, and with walking you get a reliable ground-plane confirmation. This means that due to the fixed angular relationship between points, multiple accelerometers can be used to confirm or reinforce the tracking. Even then, the most common issue you see is drift. A lot of accelerometer-based mocap suits have come and gone over time. They are getting better, but drift is something you see in all the early versions. Then, after some R&D time, eventually they solve the planar shift that lets you track people as they climb.
Without the skeleton (so tracking a single rigid object) you lose the IK solver that helps stabilise the movement model. With a single rigid body, technically all your accelerometers will give you the same reading, as they are all attached to the same object. Having more might help get rid of some interference, but it cannot add accuracy the way it does with IK.
Here is a reasonable article on the issues with this kind of tracking https://www.embedded.com/achieving-accurate-motion-tracking-in-consumer-portables.
If this worked well there would be a lot of cheap object-tracking devices on the market. Although it is kind of possible, there are, afaik, no products that do this, because it kind of doesn't work. What you want to do is AR tracking and re-projection, so take a look at AR tracking methods - they almost all use cameras and different kinds of tracking. Another thought is that you could stick an iPhone to the screen and try to write an app that uses Apple's ARKit to output position and rotation data. It will still drift and have issues, though, as you need to correlate the moving object's relative position with the projector's fixed position, as well as its throw and lens distortion at different distances.
This would be a cool problem to apply machine learning to. If you had good synchronised recordings of accelerometer movements and mocap data, you could train a model that would likely do much better at rigid-body tracking with accelerometers.
Fred