
ReconstructMe 0.4.0-193 released

Mostly a bug-fix release with the following changes:

  • Added support for selecting a specific sensor using the --sensor switch. See usage for details.
  • Fixed the "Tripcount unknown" issue.
  • Fixed the "Cannot find XML file" issue.

Go get it here

http://reconstructme.net/downloads/


In case you want to test the Microsoft Kinect Sensor backend, read this thread

https://groups.google.com/d/topic/reconstructme/gYeH8qq0lb8/discussion

Did you know … ?

that ReconstructMe adapts itself to changing environments? We’ve put a video online to demonstrate the effects.

This video shows how ReconstructMe handles dynamic environments. It smoothly updates its state as changes occur, which allows us to put ReconstructMe to use where adaptiveness is required. Of course, these changes have to be limited to a subset of the volume at any given time. If the content of the entire volume changes at once, tracking failure detection will kick in.
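The adaptation behaviour comes from the way KinectFusion-style systems fuse depth measurements: each voxel keeps a running weighted average of observed signed distances, and capping the per-voxel weight lets old measurements fade as new ones arrive. Here is a minimal sketch of that update rule; the cap value and function names are illustrative, not ReconstructMe's actual implementation:

```python
import numpy as np

def fuse(tsdf, weight, new_sdf, max_weight=64.0):
    """Fuse a new signed-distance observation into a TSDF volume.

    Capping the per-voxel weight keeps the running average
    responsive: once the cap is reached, old measurements decay
    geometrically, so the volume adapts to scene changes.
    """
    fused = (tsdf * weight + new_sdf) / (weight + 1.0)
    new_weight = np.minimum(weight + 1.0, max_weight)
    return fused, new_weight

# Toy single-voxel example: the stored distance drifts towards the
# new observations as the scene changes from 0.5 m to -0.5 m.
tsdf, w = np.array([0.5]), np.array([64.0])
for _ in range(200):
    tsdf, w = fuse(tsdf, w, np.array([-0.5]))
```

With an uncapped weight the voxel would become ever harder to change; the cap is what makes adaptation to dynamic scenes possible.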

Give it a try!

Tech Preview – Automatic Volume Stitching

The following video shows a brand new feature we've just added to our main development branch: Automatic Volume Stitching. In this video, new volumes are created as soon as the sensor leaves the current volume. Transformations between volumes are tracked automatically and used to reconstruct a complete surface model. This will allow us to keep the volumes small and the resolution per volume high to create an accurate surface.

Note that we haven't applied any post-processing in Meshlab. We might add an Iterative Closest Point algorithm to increase the accuracy at volume seams, but for now we are pretty happy with the results.
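Conceptually, stitching works by chaining the rigid transforms tracked between consecutive volumes: if each transform maps a volume into the frame of its predecessor, composing them maps every volume into the frame of the first one. A hypothetical numpy sketch of this composition, not ReconstructMe's code:

```python
import numpy as np

def compose_to_global(relative_transforms):
    """Chain per-volume rigid transforms into global poses.

    relative_transforms[i] is the 4x4 transform of volume i+1
    expressed in the frame of volume i; the first volume defines
    the global frame.
    """
    poses = [np.eye(4)]
    for T in relative_transforms:
        poses.append(poses[-1] @ T)
    return poses

def to_global(points, pose):
    """Apply a 4x4 pose to an (N, 3) array of vertices."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ pose.T)[:, :3]

# Two volumes: the second is shifted 1 m along x relative to the first.
shift = np.eye(4)
shift[0, 3] = 1.0
poses = compose_to_global([shift])
p = to_global(np.array([[0.0, 0.0, 0.0]]), poses[1])
```

Because each transform is only tracked relative to its neighbour, small errors accumulate along the chain, which is why a refinement step such as ICP at the seams could further improve accuracy.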

We haven't decided when to add this feature to the public version, because more testing needs to be done first.

Happy reconstruction,
Christoph

Public Release on February 27th

Start your engines, connect your Kinects, oil your swivel chairs: The ReconstructMe team, powered by PROFACTOR, proudly announces:

ReconstructMe will be publicly released on
February 27th, 14:00 CET

On Monday everybody, and that probably includes you, will be able to scan and reconstruct the world. ReconstructMe will be free for non-commercial use. Contact us if you are interested in commercial use.

Big thanks fly out to our BETA testers who made releasing on time possible! They provided valuable feedback throughout the entire BETA program, and without them we wouldn't have reached the robustness and usability we have now.

Here’s to our Beta testers!

BETA Phase 3

The final phase of BETA has just started! In case you haven't received any e-mails, leave a reply here.

We are looking forward to your feedback and to the public BETA, which will be released in March. We wish all participants a happy reconstruction!

Best,
Christoph

Tracking Failure Detection and Recovery

Here are a few words on a new feature we've recently added to our main development trunk: tracking failure detection and recovery. With this feature, the system is capable of detecting various tracking failures and recovering from them. Tracking failures occur, for example, when the user points the sensor in a direction that is not covered by the volume currently being reconstructed, or when the sensor is accelerated too fast. Once a tracking error is detected, the system switches to a safety position and requires the user to roughly align viewpoints. It automatically continues from the safety position when the viewpoints are aligned.

Here’s a short video demonstrating the feature at work.

We are considering integrating this feature into the final beta phase, since it increases both the stability and the usability of the system. Be warned, however: there are still cases in which the system fails to track and also fails to detect that tracking was lost, causing the reconstruction to become messy.
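A common way to structure such behaviour is a small state machine: track normally, drop into a recovery state when the alignment error exceeds a threshold, and resume once the incoming frames are close enough to the stored safety pose again. The following toy sketch illustrates the idea; the thresholds, state names, and class are made up for illustration and are not ReconstructMe's implementation:

```python
TRACKING, RECOVERY = "tracking", "recovery"

class TrackerMonitor:
    """Toy tracking-failure detector and recovery gate."""

    def __init__(self, fail_thresh=0.1, resume_thresh=0.02):
        self.state = TRACKING
        self.fail_thresh = fail_thresh      # error that counts as lost tracking
        self.resume_thresh = resume_thresh  # error close enough to the safety pose

    def step(self, alignment_error):
        if self.state == TRACKING and alignment_error > self.fail_thresh:
            self.state = RECOVERY           # freeze at the safety position
        elif self.state == RECOVERY and alignment_error < self.resume_thresh:
            self.state = TRACKING           # viewpoints roughly aligned again
        return self.state

m = TrackerMonitor()
states = [m.step(e) for e in (0.01, 0.5, 0.05, 0.01)]
```

Using a stricter threshold for resuming than for failing adds hysteresis, so the system does not flicker between the two states near the boundary.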

ReconstructMe Team Replicated

Here's a list of 3D replications printed by our keen beta testers. The sources are the 360° depth streams of team members we posted here. In case you are about to print one of our team members, let me know so I can update this post.

Derek, one of our enthusiastic users, has taken the time to replicate Martin's stream on a 3D printer. There is a write-up about the making-of on Derek's blog. Check it out, it's worth reading!

Here is his video capturing the printing sequence:

Thanks a lot Derek!

Tony just dropped us a note that he successfully printed Christoph and uploaded the results to Thingiverse. Here is an image of the result:

Thanks a lot Tony!

Bruce (3D Printing Systems) replicated all three of us. His setup reminds me a bit of Mount Rushmore. Here's the image:

Thanks a lot Bruce!

3D Segmentation using ReconstructMe

Our (PROFACTOR) interest in ReconstructMe goes beyond reconstructing 3D models from raw sensor data. The video below shows how to use ReconstructMe for stable foreground/background segmentation in 3D, a technique that is often required as a pre-processing step in 3D vision.

With ReconstructMe you can generate background knowledge by moving the sensor around and thus building a 3D model of the environment. Once you switch from build mode to track-only mode, ReconstructMe can robustly detect foreground by comparing the environment with the current frame.

This technique can be used for various applications

  • Monitoring robotic workcells for human safety aspects.
  • Intelligent reconstruction of process-relevant objects only. We will definitely do a video on this one.

to name a few.
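The underlying idea can be sketched with plain depth images: keep the reconstructed environment as a background depth map and flag pixels whose current depth is significantly closer than the background. This is a simplified numpy illustration; ReconstructMe compares against the full 3D model, not a single depth map:

```python
import numpy as np

def foreground_mask(background_depth, current_depth, tol=0.05):
    """Mark pixels that are significantly closer than the background.

    Depths are in metres; zeros denote invalid measurements and are
    never classified as foreground.
    """
    valid = (current_depth > 0) & (background_depth > 0)
    closer = background_depth - current_depth > tol
    return valid & closer

bg = np.full((4, 4), 2.0)          # flat wall 2 m away
cur = bg.copy()
cur[1:3, 1:3] = 1.0                # an object enters the scene at 1 m
mask = foreground_mask(bg, cur)
```

Having a full 3D background model instead of a single depth map makes the comparison viewpoint-independent, which is what makes the segmentation stable while the sensor moves.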

Three 360 Degree Upper Body CAD Models Reconstructed

We have just recorded three of our colleagues and created STL models from them. This time we made full models of ourselves by rotating in front of the camera: one person sat on a chair and rotated, while the other moved the Kinect up and down, so we could capture the front, the back, and also the top. If you own a 3D printer or 3D printing software, we would very much like to know whether these models are good enough for 3D printing! Please post any comments here. For everyone who made it into the BETA program, there is also the data stream, so you can create the STL models yourself. We have post-processed the STLs with Meshlab by re-normalising the normals and converting them to binary STL.
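Recomputing face normals and writing binary STL is also easy to do yourself. Here is a small self-contained sketch using only numpy and the standard library; the file name is just a placeholder:

```python
import struct
import numpy as np

def face_normals(tris):
    """Per-face unit normals for an (N, 3, 3) array of triangles."""
    n = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    lengths = np.linalg.norm(n, axis=1, keepdims=True)
    return n / np.where(lengths == 0, 1, lengths)

def write_binary_stl(path, tris):
    """Write triangles with freshly computed normals as binary STL."""
    normals = face_normals(tris)
    with open(path, "wb") as f:
        f.write(b"\0" * 80)                    # 80-byte header
        f.write(struct.pack("<I", len(tris)))  # triangle count
        for n, t in zip(normals, tris):
            # 3 normal floats + 9 vertex floats per triangle
            f.write(struct.pack("<12f", *n, *t.ravel()))
            f.write(struct.pack("<H", 0))      # attribute byte count

# One right triangle in the z = 0 plane; its normal points along +z.
tri = np.array([[[0, 0, 0], [1, 0, 0], [0, 1, 0]]], dtype=np.float32)
write_binary_stl("demo.stl", tri)
```

Binary STL is much more compact than ASCII STL: 50 bytes per triangle plus an 84-byte header, which matters for dense scans like these.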