After weeks of hard work we are proud to announce an upcoming feature called color tracking. Color tracking incorporates color information into camera motion estimation, which allows ReconstructMe to keep tracking across planar regions, cylindrical shapes, and other primitive shapes. The following video shows some challenging reconstructions that succeed with the help of color tracking.
The new tracking algorithm seamlessly blends geometric and color information, leading to improved tracking performance in almost all situations. During development we've paid particular attention to robustness and runtime. As far as robustness is concerned, we've made sure that fast variations in illumination or in the camera's auto exposure do not affect tracking performance.
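To make the blending idea concrete, here is a minimal sketch in Python/NumPy of one common way to combine the two cues: the pose optimizer minimizes a weighted sum of geometric point-to-plane residuals and photometric intensity residuals. This is not code from the ReconstructMe SDK; the function name, the weighting scheme, and the `w_color` parameter are illustrative assumptions.

```python
import numpy as np

def joint_cost(src_pts, dst_pts, dst_normals,
               src_intensity, dst_intensity, w_color=0.3):
    """Illustrative blended tracking cost for one candidate camera pose.

    src_pts:        (N, 3) source points transformed by the candidate pose
    dst_pts:        (N, 3) matched destination points
    dst_normals:    (N, 3) unit normals at the destination points
    src_intensity:  (N,) image intensities at the source pixels
    dst_intensity:  (N,) image intensities at the matched pixels
    w_color:        assumed relative weight of the color term
    """
    # Geometric term: signed distance of each source point to the
    # tangent plane of its matched destination point (point-to-plane).
    geo = np.einsum('ij,ij->i', src_pts - dst_pts, dst_normals)
    # Photometric term: brightness difference of matched pixels.
    photo = src_intensity - dst_intensity
    # A scene poor in geometry still constrains the pose through the
    # color term, and vice versa.
    return np.sum(geo ** 2) + w_color * np.sum(photo ** 2)
```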
Whether you approach it as a developer or as an end user, you should be aware of the following points to maximize tracking stability.
- Ensure that the scene you observe is texturally and/or geometrically rich. Although we've tuned the algorithm to cope with a lack of information in either stream, at least some information needs to be present in the scene.
- Try to maintain around 25-30 frames per second. Color tracking requires small increments in the camera transformation between frames; otherwise it will not converge. Please note that color tracking does more work than geometric tracking alone, so it has a slightly larger runtime footprint (see the capture-loop sketch after this list).
- Try to avoid fast camera motions that potentially blur color images.
- Discard the first few camera frames, as we have observed cameras varying exposure considerably in these frames.
- Make sure that the color camera is aligned with the depth camera in both space and time.
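The following sketch ties two of these tips together in code: discarding the first frames while auto exposure settles, and checking that the achieved frame rate stays high enough for the small pose increments color tracking needs. It is illustrative only; `grab` and `process` are hypothetical stand-ins for whatever sensor and reconstruction API you use and are not part of the ReconstructMe SDK.

```python
import time

def capture_loop(grab, process, warmup=30, target_fps=25):
    """grab() returns one RGB-D frame; process() feeds it to the
    reconstruction. Both are user-supplied callables."""
    # Tip: discard the first frames while auto exposure settles.
    for _ in range(warmup):
        grab()
    while True:
        t0 = time.perf_counter()
        process(grab())
        fps = 1.0 / (time.perf_counter() - t0)
        # Tip: below ~25 fps the pose increments between frames grow,
        # and color tracking may fail to converge.
        if fps < target_fps:
            print(f"warning: {fps:.1f} fps - tracking may become unstable")
```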
In case tracking fails, we've also added a recovery strategy that takes color information into account. This global color tracking allows you to recover by bringing the sensor close to the recovery position, as shown in the following video.
Our roadmap foresees releasing a new end-user UI version with color tracking support in the coming days. This will allow many people to test the current state of the algorithm and provide us with valuable feedback.
Good day, I am Katherin and I wanted to know: in that section of ReconstructMe, how do I configure it to capture information while moving the Kinect as in the video? At the moment I can only obtain information with the sensor still and the objects turning around it.