XBOX Kinect Lens Holder

Recently, idea_beans shared a Thing for printing a professional Xbox Kinect lens holder. This thingy can be used to increase the resolution of a Kinect/PrimeSense sensor by attaching common +2.5 reading-glass lenses to it.
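
As a rough back-of-the-envelope illustration (our reading of why the trick works, not part of the original Thing description): reading glasses are rated in dioptres, the reciprocal of focal length, so a +2.5 lens focuses at roughly 0.4 m. Placing it in front of the sensor shifts the usable working range closer, where each depth sample covers a smaller patch of the object.

```latex
P = \frac{1}{f}
\quad\Rightarrow\quad
f = \frac{1}{P} = \frac{1}{2.5\,\mathrm{D}} = 0.4\,\mathrm{m}
```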

Unfortunately, the Kinect does not have an auto-calibration feature, which means that you won’t be able to scan 360° with this upgrade. However, the PrimeSense Carmine 1.09 is reported (and [un]officially confirmed) to auto-compensate for the lens distortion.

Our ReconstructMe 1.2 using Carmine 1.09 High Detail post has more on that story.

The Coney Island Scan-a-Rama

Fred Kahl, aka Fredini, launched a Kickstarter campaign to fund his new project, the Coney Island Scan-A-Rama. From the project description:

Coney Island Scan-A-Rama is an art project to scan and produce 3D printed portraits of the masses of people who visit America’s playground: Coney Island. Visitors to the portrait studio will come in to get a 3D portrait taken and then a full body 3D figurine of them will be included in a 2014 installation recreating a fully populated model of Coney Island, New York’s Luna Park as it stood 100 years ago!

Fred is using ReconstructMe technology to capture 3D models of visitors and is probably one of the most experienced users out there. At the time of writing, the project still needs some funding, so please back it if you can.

Job Offer: Software Developer

Thanks to the success of many of our projects like ReconstructMe and CANDELOR, we are now looking to expand our team! If you are an ambitious software engineer who knows how to develop and structure complex software and are interested in developing cutting-edge technology, apply here (in German):

Software Entwickler für „Components of Vision“ (m/w)

Know someone who might be interested? Please let them know!

ReconstructMe 1.2 using Carmine 1.09 High Detail

MagWeb just dropped us the following high-detail self-scan using ReconstructMe 1.2 and a PrimeSense Carmine 1.09. He writes:

Just to give some idea of what you can get with the new version and its Carmine 1.09 support: The attached results were done using Carmine 1.09 with glasses. Seems Carmine firmware (or OpenNi 2) performs some self calibration so adding glasses works fine without doing any external calibration

Releases of ReconstructMe 1.2 and ReconstructMe SDK 1.5

We’ve released new versions of ReconstructMe (called ReconstructMeQt in previous versions) and the ReconstructMe SDK. We’re happy to introduce OpenNI 2 with this release. We decided to remove the sensor driver option from the installers of ReconstructMe and the ReconstructMe SDK, since only either the Asus sensor driver or the Microsoft SDK v1.6 is necessary.
The new SDK provides a faster and more memory-efficient polygonization routine. ReconstructMe has generally become faster and easier to handle. Feel free to check out the new version and test it.

New Releases

We are proud to announce the newest releases of the ReconstructMe SDK and ReconstructMeQt. We’ve put a lot of effort into both releases in the hope of making 3D reconstruction easier, more robust, and more versatile. Here is a video of ReconstructMeQt in action!

One of the major changes introduced in ReconstructMe SDK 1.4 is the addition of global tracking based on CANDELOR, as reported in an earlier blog post. We find reconstruction much more robust with these algorithms in place. It also allows us to advance into new workflows such as point-and-shoot reconstruction.

Secondly, we think you will be happy to hear that we have removed the forced tracking-loss limitation in the unlicensed version. The limitation has been replaced by artificial spheres generated in the output mesh.

Our graphical user frontend ReconstructMeQt 1.1 has received a couple of major usability improvements. You are now able to preview the surface directly, decimate the mesh before saving, and choose among different rendering options.

As always, you can read about all changes in the corresponding release logs (ReconstructMe SDK, ReconstructMeQt).

Happy reconstruction!

CANDELOR – Robust Global Tracking

We are proud to announce that ReconstructMe has teamed up with CANDELOR, our in-house solution for robust and fast object localization, to significantly improve the tracking experience of ReconstructMe.

The integration of CANDELOR allows us to greatly improve the following aspects of ReconstructMe:

Recovery of Camera Tracking

Until now, ReconstructMe assumed that the camera movement between subsequent frames is rather small. Violating this constraint threw ReconstructMe off track, and a manual re-localization required the user to position the camera very close to the last recovery position. While this mode works most of the time, it can be tedious to find the correct position manually.

With CANDELOR we can relax this requirement, as its algorithms allow us to determine the correct camera movement even for large displacements. This is possible because CANDELOR searches for similar features in the 3D data of the recovery position and the current sensor frame. Given a set of corresponding features, a transform can be estimated that reflects the sought camera movement.
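
CANDELOR's feature matching itself is proprietary and not detailed here, but the last step, turning 3D point correspondences into a rigid camera transform, can be illustrated with a generic least-squares (Kabsch-style) sketch. The function name and the use of Python/NumPy are our own choices for illustration, not part of the SDK:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate R, t such that R @ src[i] + t ~= dst[i].

    src, dst: (N, 3) arrays of corresponding 3D feature positions,
    e.g. features of the recovery frame and the current sensor frame.
    """
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)

    # Cross-covariance of the centered point sets.
    H = (src - src_mean).T @ (dst - dst_mean)

    # Optimal rotation via SVD (Kabsch), guarding against reflections.
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    # Translation that aligns the centroids after rotation.
    t = dst_mean - R @ src_mean
    return R, t

# Toy usage: recover a known, large camera displacement from exact correspondences.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(50, 3))
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.2, -0.1, 0.5])
R_est, t_est = estimate_rigid_transform(pts, pts @ R_true.T + t_true)
```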

The video below shows tracking with CANDELOR enabled and directly compares reconstruction performance with and without CANDELOR.

As one can see in the video, data recording is paused multiple times and the resume position is far from the pause position. Despite the displacement of the camera, tracking is successfully recovered by CANDELOR.

Automatic extrinsic calibration of multiple sensors

A nice benefit of using CANDELOR is that it has now become very easy to use multiple sensors working on the same volume. Traditional multi-sensor applications require a good estimate of the so-called extrinsic calibration, that is, the transformation between the two cameras. This extrinsic calibration is often assumed to be fixed and not allowed to change.

ReconstructMe works differently. The initial extrinsic calibration of multiple cameras is automatically calculated by CANDELOR. Once both sensors have registered, you can freely move the cameras to different locations; they remain calibrated via the data you record. The unique advantage is that multiple sensors can scan more quickly (divide and conquer).
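
This is not the SDK API, just a sketch of the underlying geometry, assuming each sensor's pose is available as a 4x4 rigid transform into the shared volume frame (the pose names and values below are made up for illustration). The extrinsic calibration between two sensors then follows from their individual poses, and points seen by one sensor can be mapped into the other's frame:

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses of sensor A and sensor B in the shared volume frame,
# e.g. as recovered by global tracking when each sensor registers.
T_volume_from_A = pose_matrix(np.eye(3), np.array([0.0, 0.0, 1.0]))
T_volume_from_B = pose_matrix(np.array([[ 0.0, 0.0, 1.0],
                                        [ 0.0, 1.0, 0.0],
                                        [-1.0, 0.0, 0.0]]),  # B views the volume from the side
                              np.array([1.0, 0.0, 1.0]))

# Extrinsic calibration between the sensors: maps points from B's frame to A's frame.
T_A_from_B = np.linalg.inv(T_volume_from_A) @ T_volume_from_B

# Map a point measured by sensor B into sensor A's coordinate frame.
p_B = np.array([0.1, 0.2, 0.8, 1.0])  # homogeneous 3D point in B's frame
p_A = T_A_from_B @ p_B
```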

In the following video you can see Christoph and me scanning a person using multiple cameras that work on the same volume.


Point-and-shoot 3D object reconstruction

We’ve added a new reconstruction mode based on global tracking: point-and-shoot. Reconstruction of objects is performed only at specific, manually picked locations. Users with low-powered GPUs/CPUs in particular will benefit from this mode, as it allows you to capture an object from very few positions. The video below shows how it works.

Despite the fact that only a few positions are used for data generation, the model looks quite smooth and closed.

We are confident that the new features will ship with the upcoming SDK and Qt releases within the next two weeks.

Happy reconstruction!
The ReconstructMe-Team

Merry Christmas and a Happy New Year

Dear All,

the ReconstructMe team wishes everyone a Merry Christmas and a Happy New Year! Together, we’ve achieved a lot this year, pushing forward the state of the art in 3D reconstruction. Although not everything we intended to do made it in time, we hope to catch up on that in the new year. Here’s an outlook for 2013 and a summary of the past couple of weeks:

Area-based Stereo Vision
We’ve teamed up with an Austria-based company that provides real-time dense area-based stereo vision systems for arbitrary dimensions. Those systems can be operated in active and passive mode and scaled according to the scanning requirements. The image below shows one of the first scans of a peanut.

Peanut

This is the first evidence that ReconstructMe can scale to arbitrary dimensions. The sensor input here is on the scale of micrometres (0.001 mm).

LEAP
We received notice that we have been accepted into the LEAP developer program. From a first glimpse of the SDK, it seems the API does not yet provide the data necessary for real-time reconstruction, but we hope that the missing features will be added within the first quarter of 2013, so we can start working on an integration.

We will be back on January 2nd 2013.

All the best,
Christoph

ReconstructMeQt and Speech Recognition

If you are using ReconstructMeQt on a Windows 7 PC, you will be happy to hear that our application can be controlled by speech commands, essentially freeing you from keeping your fingers on the keyboard while scanning. Here is how it works.

Most sensors carry a single microphone or an entire microphone array. Using Windows speech recognition, one can translate spoken commands into key-press events. We’ve created a speech recognition macro file that maps the following voice commands to keystrokes (a sketch of such a macro file is shown further below):

  • ReconstructMe Start – CTRL+P
  • ReconstructMe Stop – CTRL+P
  • ReconstructMe Reset – CTRL+R
  • ReconstructMe Save – CTRL+S

The following PDF file contains the required instructions for setting this up. [wpdm_file id=76]
The speech recognition macro can be downloaded from the link below. [wpdm_file id=75]
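
For illustration only, here is a minimal sketch of what such a macro might look like, assuming the XML format of the Windows Speech Recognition Macros tool; the exact keystroke notation may differ from what is shown here, so treat the PDF instructions above as authoritative.

```xml
<speechMacros>
  <!-- Spoken phrase on the left, emulated keystroke on the right.       -->
  <!-- "^p" stands for CTRL+P in SendKeys-style notation (assumed here). -->
  <command>
    <listenFor>ReconstructMe Start</listenFor>
    <sendKeys>^p</sendKeys>
  </command>
  <command>
    <listenFor>ReconstructMe Reset</listenFor>
    <sendKeys>^r</sendKeys>
  </command>
  <command>
    <listenFor>ReconstructMe Save</listenFor>
    <sendKeys>^s</sendKeys>
  </command>
</speechMacros>
```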

Have fun reconstructing!

Fiere Margriet

An amazing projection project by Mark Florquin uses ReconstructMe for 3D scanning:

We projected the story of ‘Fiere Margriet’ on a small but charming street (Eikstraat), during Leuven in Scène 2012. ‘Fiere Margriet’ (Proud Margriet) is an old legend from Leuven. In short it tells the story of a young lady who gets mugged and killed by a gang of thieves. They dump her into the main river in Leuven, De Dijle. Her body doesn’t sink however, but floats miraculously upstream, surrounded by a magical light.

Their making-of video gives an idea of how ReconstructMe was used to digitize the woman’s body in different poses.