MagWeb just sent us the following high-detail self-scan made with ReconstructMe 1.2 and a PrimeSense Carmine 1.09. He writes:
Just to give some idea of what you can get with the new version and its Carmine 1.09 support: the attached results were done using a Carmine 1.09 while wearing glasses. The Carmine firmware (or OpenNI 2) seems to perform some self-calibration, so adding glasses works fine without any external calibration.
We’ve released new versions of ReconstructMe (called ReconstructMeQt in previous versions) and the ReconstructMe SDK. We’re happy to introduce OpenNI 2 with this release. We decided to remove the sensor driver option from the installers of ReconstructMe and the ReconstructMe SDK, since only either the ASUS sensor driver or the Microsoft SDK v1.6 is required.
The new SDK provides a faster and more memory-efficient polygonization routine, and ReconstructMe has generally become faster and easier to handle. Feel free to check out the new version and test it.
We are proud to announce the newest release of the ReconstructMe SDK and ReconstructMeQt. We’ve put a lot of effort into both releases in the hope of making 3D reconstruction easier, more robust and more versatile. Here is a video of ReconstructMeQt in action!
One of the major changes introduced in ReconstructMe SDK 1.4 is the addition of global tracking based on CANDELOR as reported in an earlier blog post. We find reconstruction much more robust with these algorithms in place. It also allows us to advance into new workflows such as point-and-shoot based reconstruction.
Secondly, you will be happy to hear that we have removed the forced tracking-loss limitation in the unlicensed version. The limitation has been replaced by artificial spheres generated into the output mesh.
Our graphical user front-end ReconstructMeQt 1.1 has received a couple of major usability improvements. You are now able to preview the surface directly, decimate the mesh before saving it and choose among different rendering options.
Until now, ReconstructMe assumed that the camera movement between subsequent frames is rather small. Violating this constraint threw ReconstructMe off track, and a manual re-localization required the user to position the camera very close to the last recovery position. While this mode works most of the time, finding the correct position manually can be tedious.
With CANDELOR we can relax this requirement, as its algorithms allow us to determine the correct camera movement even for large displacements. This is possible because CANDELOR searches for similar features in the 3D data of the recovery position and the current sensor frame. Given a set of corresponding features, a transform can be estimated that reflects the sought camera movement.
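The estimate-a-transform-from-correspondences step can be illustrated with the classic least-squares rigid alignment (the Kabsch/SVD method). This is only a minimal sketch of the underlying principle, not CANDELOR's actual implementation, and it assumes the feature correspondences have already been found:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t such that R @ src_i + t ~ dst_i.

    src, dst: (N, 3) arrays of corresponding 3D feature points.
    Classic Kabsch/SVD solution to least-squares rigid alignment.
    """
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Example: recover a known, large camera displacement from matched points.
rng = np.random.default_rng(0)
points = rng.random((50, 3))
angle = np.pi / 5
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.4, -0.2, 1.0])
moved = points @ R_true.T + t_true
R, t = rigid_transform(points, moved)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

Because the solution is closed-form, it recovers arbitrarily large displacements, which is exactly what single-frame incremental tracking cannot do.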
The video below shows tracking with CANDELOR enabled and directly compares reconstruction performance with and without CANDELOR.
As one can see in the video, data recording is paused multiple times and the resume position is far off from the pause position. Despite the displacement of the camera, tracking is successfully recovered by CANDELOR.
Automated extrinsic calibration of multiple sensors
A nice benefit of using CANDELOR is that it has become very easy to use multiple sensors working on the same volume. Traditional multi-sensor applications require a good estimate of the so-called extrinsic calibration, that is, the transformation between two cameras. This extrinsic calibration is usually assumed to be fixed and not allowed to change.
ReconstructMe works differently. The initial extrinsic calibration of multiple cameras is calculated automatically by CANDELOR. Once both sensors have registered, you can freely move the cameras to different locations; they remain calibrated via the data you record. The unique advantage is that multiple sensors can scan more quickly (divide and conquer).
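The idea can be sketched as simple pose composition: if global tracking yields each camera's pose relative to the shared volume, the camera-to-camera extrinsic falls out for free. The following is illustrative only; the function names and the 4x4 camera-to-world pose convention are assumptions, not the SDK's API:

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def extrinsic(T_world_cam1, T_world_cam2):
    """Relative transform mapping camera-2 coordinates into camera-1 coordinates.

    Both inputs are camera-to-world poses recovered by global tracking
    against the same reconstruction volume.
    """
    return np.linalg.inv(T_world_cam1) @ T_world_cam2

# Two cameras observing the same volume from different sides.
T1 = pose(np.eye(3), [0.0, 0.0, 0.0])
R2 = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])      # 90 degrees about the z axis
T2 = pose(R2, [1.0, 0.0, 0.0])
T_12 = extrinsic(T1, T2)
# With camera 1 at the volume origin, the extrinsic equals camera 2's pose.
assert np.allclose(T_12, T2)
```

Because each pose is re-estimated from the recorded data, the extrinsic stays valid even after a camera is moved, which is why no fixed rig is needed.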
In the following video you can see Christoph and me scanning a person using multiple cameras that work on the same volume.
Point-and-shoot 3D object reconstruction
We’ve added a new reconstruction mode based on global tracking: point-and-shoot. It means that reconstruction of objects is performed only at specific, manually picked locations. Users with low-powered GPUs/CPUs will especially benefit from this mode, as it allows you to capture an object from very few positions. The video below shows how it works.
Despite the fact that only a few positions are used for data generation, the model looks quite smooth and closed.
We are confident that the new features will ship with the upcoming SDK and Qt releases within the next two weeks.
The ReconstructMe team wishes everyone a Merry Christmas and a Happy New Year! Together, we’ve achieved a lot this year, pushing forward the state of the art in 3D reconstruction. Although not everything we intended to do made it in time, we hope to catch up on that in the new year. Here’s an outlook for 2013 and a summary of the past couple of weeks:
Area-based stereo vision
We’ve teamed up with an Austrian company that provides real-time, dense, area-based stereo vision systems for arbitrary dimensions. These systems can be operated in active and passive mode and scaled according to the scanning requirements. The image below shows one of the first scans: a peanut.
This is the first evidence that ReconstructMe can scale to arbitrary dimensions: the sensor input here is in the micrometer range (0.001 mm).
We received word that we have been accepted into the LEAP developer program. From a first glimpse of the SDK it seems the API does not yet provide the data necessary to perform real-time reconstruction, but we hope that the missing features will be added within the first quarter of 2013, so we can start working on an integration.
ReconstructMe is a versatile tool. Besides scanning people for fun, ReconstructMe has been put into action in industrial applications. The video below shows how existing machinery and pipes can be digitized to generate complete 3D models.
If you are using ReconstructMeQt on a Windows 7 PC, you will be happy to hear that our application can be controlled by speech commands, essentially freeing you from keeping your fingers on the keyboard while scanning. Here is how it works.
Most sensors carry a single microphone or an entire microphone array. Using Windows speech recognition, one can translate spoken commands into key-press events. We’ve created a speech recognition macro file that maps the following voice commands to keystrokes:
ReconstructMe Start – CTRL+P
ReconstructMe Stop – CTRL+P
ReconstructMe Reset – CTRL+R
ReconstructMe Save – CTRL+S
The following PDF file contains the required instructions for setting this up.
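The command table above amounts to a simple phrase-to-keystroke lookup. Here is a hypothetical Python sketch of that mapping; the function name and phrase normalization are illustrative only and are not part of the actual macro file:

```python
# Hypothetical phrase-to-keystroke table mirroring the macro file above.
VOICE_COMMANDS = {
    "reconstructme start": "CTRL+P",
    "reconstructme stop":  "CTRL+P",
    "reconstructme reset": "CTRL+R",
    "reconstructme save":  "CTRL+S",
}

def keystroke_for(phrase):
    """Return the keystroke for a recognized phrase, or None if unmapped."""
    return VOICE_COMMANDS.get(phrase.strip().lower())

assert keystroke_for("ReconstructMe Start") == "CTRL+P"
assert keystroke_for("ReconstructMe Save") == "CTRL+S"
```

Note that start and stop deliberately map to the same keystroke, since CTRL+P toggles recording in ReconstructMeQt.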
We’ve just released ReconstructMe SDK 1.3, bringing a huge list of improvements. In particular, performance for older graphics cards and high-resolution volumes has improved, support for tilt motors was added, and the scanning time in non-commercial mode is now significantly longer. This is just the tip of the iceberg; for an in-depth log of changes, visit the release page.
Amazing projector project by Mark Florquin uses ReconstructMe for 3D scanning:
We projected the story of ‘Fiere Margriet’ on a small but charming street (Eikstraat), during Leuven in Scène 2012. ‘Fiere Margriet’ (Proud Margriet) is an old legend from Leuven. In short it tells the story of a young lady who gets mugged and killed by a gang of thieves. They dump her into the main river in Leuven, De Dijle. Her body doesn’t sink however, but floats miraculously upstream, surrounded by a magical light.
Their making-of video gives an idea of how ReconstructMe was used to digitize the woman’s body in different poses.
We’ve just released ReconstructMe SDK 1.1 which brings you improved image support, calibration support as well as an increased scanning time for non-commercial projects. This release also greatly improves the documentation of the API with additional examples and sections on critical API elements.