ReconstructMe 2.5.1034 brings updated Sensor Config and improved User Interface

We are proud to announce ReconstructMe v2.5.1034. This update simplifies and improves the configuration of your sensor. Either select a supplied configuration, tailor-made for every supported sensor, or write and tweak your own configuration as before.

[Screenshot: sensor selection]

Additionally, we improved the user interface while refactoring parts of the UI code. Most importantly, ReconstructMe is now DPI aware and can be used out of the box on high-resolution displays.

We also improved the rendering code, resulting in less overhead and more efficient usage of the GPU.

Kicking off our Camera Review Series


As you know, ReconstructMe already supports a variety of commodity 3D cameras, and we are working hard on integrating new and exotic ones as soon as we become aware of them. We felt it was about time to put the details into perspective. Therefore, we are kicking off a camera review series to cover sensor specifications, installation instructions, and more.

Starting with the Intel RealSense R200 review, we plan to publish an in-depth review of each sensor supported by ReconstructMe. The list of supported sensors and reviews can be found on our supported sensors page.

Free ReconstructMe 2.4.1016 released

On behalf of the ReconstructMe team, I’m proud to announce ReconstructMe v2.4.1016. This update improves support for the following sensors:

  • Intel RealSense F200
  • Intel RealSense R200

You can grab the latest version from our download page. We are releasing this version free of charge for non-commercial projects as announced recently.

Usage

To use Intel RealSense cameras on your computer, you will need to install the Intel RealSense camera drivers and use the correct ReconstructMe sensor configuration files. For your convenience, you can download both below.

Once you have installed the necessary components, open ReconstructMe and set the path to the configuration file as shown in the screenshot below.

[Screenshot: setting the RealSense sensor configuration file in ReconstructMe]

Troubleshooting

Please note that Intel recommends connecting the sensors directly to a dedicated USB 3 port. Avoid using hubs or extension cables. If your sensor does not respond for a longer period of time, restarting the Intel depth camera services might help. You can find these services in the local services management console, as shown below.

[Screenshot: restarting the Intel RealSense depth camera service in the services console]

If You Love Something, Set It Free



From now on, ReconstructMe, our user interface for digitizing the world in 3D, is available to everyone for free!

We offer ReconstructMe free of cost and without limitations for private and non-commercial projects. This means you can download ReconstructMe and use it for everything from scanning for 3D printing to architecture, documentation, and animation. For commercial purposes, we continue to offer royalty-based licenses for ReconstructMe and the ReconstructMe SDK.

Head over to the download area and grab the latest version in order to set it free. If you already have ReconstructMe licensed but your license has expired, simply re-open ReconstructMe and it will run in non-commercial mode instead of unlicensed mode.

Add+it 2015: Symposium on Additive Manufacturing and Innovative Technologies

Take the opportunity to discuss with numerous international experts from 10 countries and 3 continents, from both science and industry, what 3D printing technologies offer today and what they can be expected to offer in the future.

[Image: Add+it 2015 workshop programme]

What is special about Add+it 2015?

Workshops provide the opportunity to interact with participants and experts, discuss relevant 3D printing issues, and initiate possible further business cooperation.

The Add+it 2015 is organized by PROFACTOR and IPPE, the Institute of Polymer Product Engineering at the Johannes Kepler University Linz.

Registration and Fees

  • Further information on venue, programme and registration is available on the conference website.
  • The registration form should preferably be completed online. Deadline for registration: August 20, 2015

Early registration with a discount of € 30,- has been extended until August 10, 2015!

ReconstructMe 2.4 brings color tracking

As announced in our previous post, we added a new color tracking feature to the SDK and promised to release a new UI frontend version supporting it. Today, on behalf of the ReconstructMe team, it is my pleasure to announce this new frontend release.

In the video below you can see ReconstructMe UI in action. Both scenes are tracked mainly by color information, as the geometric information alone (a planar shape in the first scene and a cylindrical shape in the second) does not suffice to estimate the camera position accurately.



Color tracking is currently enabled for all sensors that support an RGB color stream. Algorithm settings are chosen automatically, so you don’t have to configure anything. If your sensor does not support RGB, the algorithm gracefully falls back to geometric tracking only. Note that scanning in color is not a requirement for the color tracking algorithm to work properly.

Here are some tips for best results:

  • Ensure that the scene you observe is texturally and/or geometrically rich. Although we’ve tuned the algorithm to cope with a lack of information in both streams, we need at least some information to be present in the scene.
  • Try to get around 25-30 frames per second. Color tracking requires small increments in the camera transformation between frames; otherwise it will not converge. Please note that color tracking does more work than geometric tracking alone, so it has a slightly increased runtime footprint.
  • Try to avoid fast camera motions that potentially blur color images.
  • Try to avoid reflective materials. Although a reflection appears as texture, it visually changes when moving the camera.

ReconstructMe SDK – Color Tracking Announcement

After weeks of hard work, we are proud to announce a new upcoming feature called color tracking. Color tracking incorporates color information into camera motion estimation. This allows ReconstructMe to keep tracking over planar regions, cylindrical shapes, and other primitive shapes. The following video shows some challenging reconstructions that succeed with the help of color tracking.



The new tracking algorithm seamlessly blends geometric and color information, leading to improved overall tracking performance in almost all situations. During development we paid attention to robustness and runtime. As far as robustness is concerned, we made sure that fast variations in illumination or camera auto-exposure do not affect tracking performance.

From a developer’s and user’s point of view, you should be aware of the following points to maximize tracking stability; a sketch of the basic scanning loop follows the list.

  • Ensure that the scene you observe is texturally and/or geometrically rich. Although we’ve tuned the algorithm to cope with a lack of information in both streams, we need at least some information to be present in the scene.
  • Try to get around 25-30 frames per second. Color tracking requires small increments in the camera transformation between frames; otherwise it will not converge. Please note that color tracking does more work than geometric tracking alone, so it has a slightly increased runtime footprint.
  • Try to avoid fast camera motions that potentially blur color images.
  • Discard the first few camera frames, as we have observed cameras varying their exposure vastly in these frames.
  • Make sure that the color camera is aligned with the depth camera in space and time.
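
To put these points in context, below is a minimal sketch of the basic scanning loop using the SDK’s documented C API. Color tracking settings are chosen automatically by the SDK, so the loop itself does not change; treat this as an outline rather than production code.

    #include <reconstructmesdk/reme.h>

    // Minimal scanning-loop sketch based on the publicly documented
    // ReconstructMe SDK C API; error handling is reduced to the
    // REME_SUCCESS convenience macro for brevity.
    int main() {
      reme_context_t c;
      reme_context_create(&c);
      reme_context_compile(c);            // build OpenCL programs for the device

      reme_sensor_t s;
      reme_sensor_create(c, "openni;mskinect;file", true, &s);
      reme_sensor_open(c, s);

      reme_volume_t v;
      reme_volume_create(c, &v);          // reconstruction volume

      while (REME_SUCCESS(reme_sensor_grab(c, s))) {
        reme_sensor_prepare_images(c, s); // depth (and color) data for tracking
        if (REME_SUCCESS(reme_sensor_track_position(c, s))) {
          reme_sensor_update_volume(c, s); // fuse the current frame into the volume
        }
      }

      reme_context_destroy(&c);
      return 0;
    }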

In case tracking fails, we’ve also added a recovery strategy that takes color information into account. This global color tracking allows you to recover by bringing the sensor to a position close to the recovery position, as shown in the following video.



Our roadmap foresees first releasing a new end-user UI version that supports color tracking in the coming days. This will allow many people to test the current state of the algorithm and provide us with valuable feedback.

ReconstructMe 2.3.958 with Intel HD support

We continue to update ReconstructMe and are happy to announce our newest release, supporting Intel HD 4000/4600 graphics and Intel Core i5/i7 CPUs.

If you intend to run ReconstructMe on an Intel HD graphics card, please update the graphics driver. If you prefer running ReconstructMe on your Intel CPU, install the latest OpenCL runtime.

Have fun reconstructing and let us know what you think!

ReconstructMe 2.3.954 released

We have just released a new version of ReconstructMe. This is a bug-fix release that resolves immediate tracking-lost issues on NVIDIA cards: users reported that tracking was lost immediately when starting a scan. The issue seems to occur mainly on the following models: GTX750, GTX970, GTX960, GTX840M, and GTX850M. If you are affected, please try the latest version.

ReconstructMe 2.3.952 released

We are happy to announce the release of ReconstructMe 2.3.952 today. The latest version can be downloaded here.

I’d like to briefly introduce the new SDK/UI features here and provide in-depth information in upcoming blog posts. The SDK/UI now supports the Intel RealSense F200 camera, and we’ve reworked the sensor positioning API to allow more fine-grained control over the scan start position of the sensor with respect to the volume.

The UI now supports a rich set of sensor positioning options, including positioning the sensor based on a special marker seen by the sensor. This feature allows you to easily position the volume in world space. The following video shows a turntable reconstruction of a toy horse using marker positioning and the Intel RealSense F200 camera.



If you would like to try out the new Intel RealSense F200 camera, please download this sensor configuration file. You will need to specify the path to this file in the UI at Device / Sensor.

In case you want to give marker positioning a try, please download this marker image, print it, and measure the printed size in millimeters. Make sure to leave a large white border around the marker. You will need to set the correct marker size in the UI at Volume / Volume Position. We usually print the marker at a size of 90 millimeters. When using marker positioning, make sure the sensor captures the entire marker.

When you notice that the sensor position starts to vary as you move the marker, you know that ReconstructMe has detected it. Once ReconstructMe has found the marker, you can use the Offset slider to adjust where the volume starts.

Enjoy and let us know what you think.

ReconstructMe selfies displayed on 3D screen

by Stefan Speiser

Hello everyone, my name is Stefan Speiser. I am a Bachelor’s graduate of the University of Applied Sciences (UAS) Technikum Wien in Vienna, Austria. Today I will present my bachelor’s thesis, in which I worked with ReconstructMe.

The goal of the thesis was to create a booth for trade fairs and open days at the UAS Technikum Wien. At the booth, a 3D scan of any willing visitor is created, modified, and then shown on a 3D monitor, so that visitors can view themselves in 3D. To reduce the time needed for the whole scanning, modifying, and output process, a script automating these steps was created.
The booth was conceived as a marketing tool for the UAS, intended to attract even more students by demonstrating how interesting technology can be.

This picture shows an early development stage of the booth with a functioning version of the script and all programs working as they should. On the left you can see the 3D monitor, next to it the control monitor running ReconstructMe, and on the right the Kinect system. Just barely visible at the bottom is a rotating chair.

Early development stage of the booth

To explain how I achieved this result, I would first like to describe the hardware and software used and afterwards explain the automation script in detail.

Microsoft Kinect

To capture the visual information needed for the 3D scan, a Microsoft Kinect system was used. The person to be scanned sits in front of the Kinect on a rotating chair. The built-in infrared projector emits a pattern of dots that covers the person in front of the Kinect sensor and the rest of the room. These dots are recorded by the infrared camera, and the Kinect can calculate a depth image from this information.

An RGB camera recording at a resolution of 640×480 pixels and a frame rate of 30 Hz captures the color information of the scene in front of the Kinect.

IR-Pattern from infrared projector (Source: MSDN, 2011)
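
The depth calculation rests on triangulation between the projector and the IR camera: the apparent shift (disparity) of each dot shrinks with distance. The following sketch illustrates the principle with assumed numbers; the Kinect’s real calibration constants are internal to the device.

    #include <cstdio>

    // Illustration of structured-light triangulation with assumed constants;
    // depth follows z = f * b / d, with focal length f (pixels),
    // projector-camera baseline b (meters), and dot disparity d (pixels).
    int main() {
      const double f = 580.0;  // assumed IR camera focal length in pixels
      const double b = 0.075;  // assumed projector-camera baseline in meters
      const double d = 8.7;    // measured disparity of one dot in pixels
      std::printf("estimated depth: %.2f m\n", f * b / d);  // prints ~5.00 m
      return 0;
    }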

ReconstructMe

Both the depth image and the color image from the Kinect are used by ReconstructMe. Since ReconstructMe offers native plug-and-play compatibility with the Kinect, making scans was a breeze. The built-in 3D-Selfie function of ReconstructMe was the perfect fit for my project. It automatically detects when the person in front of the Kinect has rotated a full 360 degrees and stops recording the scan. During the processing phase, ReconstructMe closes all holes in the mesh, shrinks the 3D scan, and slices the upper body, so that if you would like to 3D print your scan, you can just save it and it is ready to print. (More information about the 3D-Selfie function can be found here: http://reconstructme.net/2014/04/24/reconstructme-2-1-introduces-selfie-3d/)

3D-Selfie scan after ReconstructMe processing

Meshlab/MeshlabServer

Meshlab is an open-source program that allows you to process and edit meshes. A mesh is a collection of, for example, triangles that together form a three-dimensional structure. It is available either as a program with a GUI or as MeshlabServer. The special thing about MeshlabServer is that you can create a script containing all the filters you would like to apply to your mesh and then run this script from the command console; a sketch of such an invocation follows below. For automation purposes, this approach is of course the preferable one.

The filters used in my script rotate and enlarge the 3D scan from ReconstructMe to make it more visible on the 3D monitor, and another filter reduces the number of triangles in the mesh by half. The quality reduction is barely visible, but the file size, and therefore the time needed to save the modified mesh, is also halved.
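
As an illustration, the automation could invoke MeshlabServer in batch mode roughly as sketched below. The file names are hypothetical; -i, -o, and -s are MeshlabServer’s standard options for the input mesh, the output mesh, and the filter script, and the .mlx script itself can be recorded in the Meshlab GUI (Filters, Show current filter script) and saved for batch use.

    #include <cstdlib>

    // Hedged sketch: run MeshlabServer on a saved selfie scan. File names are
    // hypothetical; -i/-o select the input/output meshes and -s points to an
    // .mlx script holding the rotation, scaling, and decimation filters.
    int main() {
      return std::system(
          "meshlabserver -i selfie.ply -o selfie_small.obj -s selfie_filters.mlx");
    }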

[Screenshot: Meshlab]

[Screenshot: MeshlabServer]

Tridelity MV5500 3D-Monitor

The modified 3D scan is shown to the visitor on an autostereoscopic 3D monitor. What is autostereoscopy, you might ask? It is a technology that enables the viewer to see a three-dimensional picture without the need for 3D glasses or similar equipment.

This effect is achieved with a parallax barrier mounted in front of the LCD panel, which allows each eye to see only its own picture. The 3D scan is chopped into small, slightly shifted vertical lines; the brain stitches these pictures together and makes you perceive a 3D view. The effect works best at a specified distance, and since the monitor supports MultiView, up to five people can view the 3D scan at the same time from different angles. Depending on the angle to the monitor, the viewer sees the 3D scan more from the front or from the side.

Parallax Barrier schematic (Source: Muchadoaboutstuff, 2013)

AutoHotKey & Pulover’s Macro Creator

To automate keyboard entries and mouse clicks in the programs used, AutoHotKey was the tool of choice. It lets you automate virtually every action in the Windows OS and in every program running on it. It features IF/ELSE branches, loops, a PictureSearch (AutoHotKey’s ImageSearch) that looks for a specific detail on the screen and, if found, triggers another function, and many other capabilities.

Pulover’s Macro Creator is a freeware program that offers a GUI for all functions and tasks of AutoHotKey. This makes working with and programming scripts in AutoHotKey much easier and more time-efficient.
That concludes the description of the hardware and software used; now let me explain the automation script in detail.

The automation script

The script first starts ReconstructMe and the Tridelity software, which loads the last saved 3D scan and outputs it on the 3D monitor. Then, via PictureSearch, the ReconstructMe window is scanned for an error message that appears when the Kinect sensor is not recognized. If the error appears, a prompt asks the user to solve the problem and provides instructions.

Next, a 3D-Selfie scan is started, and when it is finished, the user and the visitor are asked whether they like the result or whether another scan should be started. If they are content, the scan is saved, overwriting the last saved scan.
The visitor is then asked whether they would like to save the scan in an extra folder. If so, the visitor enters their name, and the scan is saved, with the current date appended, into a dedicated folder. The visitor can then copy the scan to a USB stick, take it home, and modify it or print it directly on a 3D printer.

The scan is then modified by the Meshlab script running in MeshlabServer. After the filters are applied, the modified scan is saved as an .obj file, which in the next step can be opened by the Tridelity 3D monitor software. This software outputs the 3D scan and rotates it so that visitors can view themselves from all angles. As you can see in the pictures, the 3D scan created in ReconstructMe is in color, while the output on the 3D monitor is only in shades of grey. This is because the 3D monitor can only play back .obj files, which, without an accompanying texture, cannot store color information. ReconstructMe, on the other hand, offers several output file formats (.ply, .stl, and .obj with an accompanying .mtl)!

After a defined time, the user is asked whether they would like to stop the script, which then closes all running programs and finally exits itself, or whether another scan is wanted, in which case the script jumps back to the 3D scan routine and starts all over.

Thanks to a sponsored license for ReconstructMe, which lowers the processing time after completion of the scan by forty seconds, one complete pass of the script, from the start of the 3D scan to the final output on the 3D monitor, takes two minutes and twenty seconds. That is definitely a time visitors are willing to wait, asking questions while their scan is being prepared for their viewing pleasure.

I would like to take this opportunity to once more thank the whole ReconstructMe team, especially Mr. Rooker, and my UAS bachelor’s thesis supervisor, FH-Prof. Dr. Kubinger, for their support and valuable input.

ReconstructMe 2.2.940 Released

Today we are happy to announce a new release of ReconstructMe UI and ReconstructMe SDK. The new UI supports 64-bit systems and saving OBJ files with texture, as outlined in our previous post. We’ve also made the surface scaling in Selfie 3D mode optional. The SDK release brings a couple of bug fixes for x64 support and better-tuned texturing parameters.

Head over to the downloads section to grab ReconstructMe.

ReconstructMe SDK with UV Texture Support and x64 Support

Today we are happy to release an update for the ReconstructMe SDK. Over the past couple of months we have worked hard on adding the two most requested features: the new SDK is now able to export UV texture maps, and it brings 64-bit support.

UV Texture Mapping

Previously, ReconstructMe was not able to export colorized meshes in the .OBJ file format, because this format does not support vertex colors. We have now adapted our pipeline to automatically convert from vertex colors to UV texture mapping when exporting as OBJ.

As simple as this may sound, it is not a trivial conversion and requires some hard thinking to get right. The steps involved include unfolding complex three-dimensional shapes onto a 2D disc in such a way that minimal visual distortion appears, plus rearranging the individual portions of the unfolded structure so that empty space in the texture is avoided.

[Image: textured mesh with texture seams (left) and the corresponding texture map (right)]

On the left side of the image above you can see the final textured mesh. The green lines indicate the texture seams, i.e., the cuts that were made virtually to unfold the mesh into a disc-like structure. On the right side the texture map is displayed.

We consider the current state of this feature beta, so when you use it, expect some glitches, such as increased computation time and visual artifacts along texture seams.
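
For orientation, here is a hedged sketch of how the export might look through the SDK’s documented C API; the conversion is triggered simply by choosing the .obj extension when saving the generated surface. Error handling is omitted for brevity.

    #include <reconstructmesdk/reme.h>

    // Sketch: extract the surface from a scanned volume and export it.
    // Assumes c is an initialized reme_context_t and v the reconstruction
    // volume filled by a previous scan.
    void export_textured_obj(reme_context_t c, reme_volume_t v) {
      reme_surface_t m;
      reme_surface_create(c, &m);      // mesh container
      reme_surface_generate(c, m, v);  // extract the mesh from the volume
      // Saving as .obj triggers the vertex-color to UV-texture conversion;
      // saving as .ply, by contrast, keeps per-vertex colors.
      reme_surface_save_to_file(c, m, "mesh.obj");
    }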

x64 Support

As promised, we now release 32-bit and 64-bit versions of our SDK side by side. You should consider switching to the 64-bit version when processing huge models with a large memory footprint, as 64-bit allows the SDK to address more memory and complete the processing in such cases.

You can download the updated SDK here.