Idea Contest – Win a 3D Printer!

Hello everyone!

We are proud to announce the first ReconstructMe idea contest, where your idea can win a 3D printer and other great prizes. To participate, all you need is a good idea. We’ve put together a short document describing the contest, the evaluation criteria, and everything else you need to know to get started.

Please note that the closing date has been extended from the 15th of June 2014 to the 19th of June 2014. If your submission contains larger files, please upload them to a third-party service and link to the material.

We are looking forward to seeing your submissions!

ReconstructMe Large Scale Reconstruction – Development Insights

Recently we kicked off the development of a new feature: large scale reconstruction. Our vision is to enable users to reconstruct large areas with low-cost sensors on mobile devices. This post shows the initial developments in boundless reconstruction.

Many approaches could be applied to enable this feature. After an evaluation we decided on a solution that integrates nicely into ReconstructMe: we translate the volume along canonical directions and keep track of the camera position in world space. Once we had determined how to shift, we needed to figure out when to shift.

We decided to go with the concept of what we call trigger boundaries, which are defined relative to the volume. When a specific point crosses such a boundary, the volume is shifted. Our first approach was to use the camera position, starting at the center of the volume; once the camera crossed the boundary, the volume was shifted. This did not perform ideally, since the data behind the camera is allocated but most likely never captured, which is wasteful. After evaluating different options, we settled on specifying the trigger point as the center of the view frustum in camera space. Again, when the trigger point crosses a trigger boundary, the volume is shifted along the axis on which the crossing occurred.
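The shift test itself is small. The sketch below is illustrative only (the names and the boundary margin are invented; this is not the ReconstructMe implementation): the trigger point, i.e. the view frustum center transformed into world space, is tested against per-axis boundaries of the current volume, and the volume is shifted along every axis whose boundary was crossed.

/* Hypothetical volume bookkeeping for illustration. */
typedef struct {
    float min_corner[3];  /* world-space lower corner of the volume, meters   */
    float size[3];        /* edge lengths of the volume, meters               */
    float trigger_margin; /* distance of the trigger boundary from each face  */
} volume_state;

/* Shift the volume along every axis whose trigger boundary was crossed by
 * the trigger point. Returns 1 if a shift happened. Moving the stored TSDF
 * data along with the volume is omitted here; a real implementation would
 * presumably shift by a whole number of voxels so the retained data can be
 * copied without resampling. */
static int maybe_shift_volume(volume_state *v, const float trigger[3],
                              float shift_step)
{
    int shifted = 0;
    for (int axis = 0; axis < 3; ++axis) {
        float lo = v->min_corner[axis] + v->trigger_margin;
        float hi = v->min_corner[axis] + v->size[axis] - v->trigger_margin;
        if (trigger[axis] < lo)      { v->min_corner[axis] -= shift_step; shifted = 1; }
        else if (trigger[axis] > hi) { v->min_corner[axis] += shift_step; shifted = 1; }
    }
    return shifted;
}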

[Image: large scale volume shift]

In testing we faced the issue that ReconstructMe requires decent computation hardware, and it is rather tedious to move around with a full-blown desktop PC or gamer notebook. Luckily, an existing feature called the file sensor helped us speed up testing and data acquisition. Recording the stream of a depth camera does not require a lot of hardware resources and can be performed on Windows 8 tablets (an Asus Transformer T100 in this case).
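Recording is cheap because a single depth frame is just a small 16-bit image. The sketch below illustrates the idea by appending raw frames to a binary file; it is not the file format used by ReconstructMe's file sensor.

#include <stdio.h>
#include <stdint.h>

/* Append one raw 16-bit depth frame (e.g. 640x480, values in millimeters)
 * to an open recording file. That is roughly 600 KB per frame, which a
 * low-powered tablet writes easily; the actual reconstruction can then be
 * run later from the recording on a machine with a capable GPU. */
static int append_depth_frame(FILE *f, const uint16_t *depth,
                              int width, int height)
{
    size_t n = (size_t)width * (size_t)height;
    return fwrite(depth, sizeof(uint16_t), n, f) == n ? 0 : -1;
}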

The recorded streams were used to test the approach and to extract a colorized point cloud of the globally scanned area. The tests showed a drift of the camera, which was expected. Nonetheless, ReconstructMe is able to reconstruct larger areas without any problems if enough geometric information is available.

Based on our initial experience, we plan to invest in more research on robust camera tracking algorithms, loop closure detection and mobility. Additionally, we will need to settle on a workflow for the user interface and at the SDK level.

 

A 3D scanner for Hunt Library (Part 1)

by William Galliher

Hello everyone, we are the 3D Scanner team from North Carolina State University! For our senior design project, we are constructing a full-body 3D scanner station for the Hunt Library here on campus. Hunt Library has a technology showcase and a Makerspace with 3D printers, and they are dedicated to showing what this technology can do. Unfortunately, many people who come into the library do not know what the technology is capable of, and many printers take quite a while to print.

To solve this problem, and to show off what can be done in this field, our team has decided to construct a 3D scanner that will scan a patron in less than two minutes. The booth will show the scan as it takes place and export it as an .STL file, which can then be printed in the Makerspace within the library.

As part of the planning stage for this project, we spent time evaluating hardware and software alternatives. In doing so, we came across the video for ReconstructMe 2.0, which showed two sensors performing a fast, real-time scan of someone spinning around in a chair. At that point, our design involved moving a single sensor up and down to capture multiple levels of a scan. Multiple sensors, along with the fast recording time, would allow us to move our original goal of a scan within five minutes down to below two, so we decided to go with ReconstructMe.

[Video: the video that made the decision for ReconstructMe]

We are in the midst of constructing our station now, and recently completed multiple sensor support and file export. We have alpha and beta demos rapidly approaching, in addition to a final design day in late April. We will be back soon after that to show our final product. Thanks for reading, and see you again soon!

All images courtesy of http://lib.ncsu.edu/huntlibrary

ReconstructMe 2.1 introduces Selfie 3D

We are thrilled to announce that today’s ReconstructMe release includes Selfie 3D, a feature that allows you to capture 3D printable self-portraits. We developed Selfie 3D to simplify the process of generating 3D printable busts of yourself and your friends, just like the one below.

The current Selfie 3D feature is best used for generating head-to-shoulder busts. To use it, simply activate Selfie 3D and turn in front of your camera. The tutorial covers the basic steps and has some invaluable tips and tricks for creating the best possible busts. So don’t miss it!

Is the output directly printable?

Yes! Post-processing is fully automatic. We’ve put a lot of effort into the automatic post-processing of your scan. Here’s what happens behind the scenes.

Making it watertight
Watertight refers to a property of 3D meshes that allows a 3D printer to determine the inside and the outside of the mesh. A mesh without holes is often called watertight because, if you filled its inside with water, none would leak out. ReconstructMe enforces this property; a minimal check of it is sketched after this list of steps.
Creating a planar stand
ReconstructMe slices the model on the bottom to generate a nice planar stand of your bust, so that it does not fall over when being put down.
Fixing the orientation
ReconstructMe will place the origin of the model on the center of the base of the bust with positive z-direction pointing upwards towards its head. This will allow you to directly import the bust in your favorite 3D printer application or printing service and your bust should already be placed on the printer’s virtual platform.
Scaling it down
Since your 3D printer won’t be able to print you at full size, ReconstructMe scales your model down to 20 cm when saving. Note that the saved model dimensions are in millimeters.
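To make the watertight property concrete: an indexed triangle mesh is free of holes exactly when every edge is shared by two triangles. The sketch below is illustrative only (it is not ReconstructMe's internal implementation) and simply counts edges that belong to a single triangle; a result of zero means the mesh is watertight.

#include <stddef.h>

/* Count boundary edges of an indexed triangle mesh (3 vertex indices per
 * triangle). A result of 0 means every edge is shared by exactly two
 * triangles, i.e. the mesh is hole free (watertight). Illustrative O(n^2)
 * scan; production code would use an edge hash instead. */
static size_t count_boundary_edges(const unsigned *tris, size_t tri_count)
{
    size_t boundary = 0;
    for (size_t t = 0; t < tri_count; ++t) {
        for (int e = 0; e < 3; ++e) {
            unsigned a = tris[3 * t + e];
            unsigned b = tris[3 * t + (e + 1) % 3];
            size_t uses = 0;
            for (size_t u = 0; u < tri_count; ++u) {
                for (int f = 0; f < 3; ++f) {
                    unsigned c = tris[3 * u + f];
                    unsigned d = tris[3 * u + (f + 1) % 3];
                    if ((c == a && d == b) || (c == b && d == a))
                        ++uses;
                }
            }
            if (uses == 1)
                ++boundary;
        }
    }
    return boundary;
}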

Need even more info? Make sure to check out our blog post about ReconstructMe SDK 2.1 and stay tuned for further blog posts covering this feature.

How long does post-processing take?

Usually, post-processing takes between 15 and 25 seconds. The time increases on a low-powered machine or when you don’t have a ReconstructMe license. We used the Selfie 3D feature during the Long Night of Research, where we gathered over 150 scans in 3 hours. To view all scanned models, click the image below.

[Image: montage of all scans]

Amazing, isn’t it? Tell us about your favorite scan! And don’t forget to download.

Download 2.1.348 for Vista/7/8

Weight Watching using ReconstructMe

by Francois Chasseur

Who would have thought that 3D scanning and printing could have a therapeutic effect? We recently discovered this quality by accident.

It’s difficult to see our body changing, because we see it every day in the mirror. Even if the loss or gain of weight is great, it remains tricky to actually see it. Photos are often unrealistic due to lighting, exposure, the camera, and how you are posing. But you can’t lie to a 3D camera. For his huge weight loss, Sebastien called on the services of VOUSen3D to create 3D models of himself, which we achieved by building full body scans using ReconstructMe.

[Image: Sebastien]

In July 2013, Seb had bariatric surgery (a gastric bypass) and wanted to keep a more tangible memory than photos of his ‘rebirth’, as he says, his ‘refurbishing’, his new life. We met with him one week before his surgery, and every month since. After each scan he prints out his new figurine. You can’t ignore the evolution when they are all standing in a row. These figurines are helping Sebastien a lot in his psychological healing: he can touch and see his progressive departure from obesity. He really knows where he’s coming from and where he’s going. When he looks at himself in the mirror, he doesn’t see an obese man anymore. He sees a healthier, more attractive man with great self-esteem – which he is!

We wish you much courage, Seb. Congrats on the 50 kg lost – only 30 more to go ;-) Enjoy your new life!

All images courtesy of Francois Chasseur and Sebastien

ReconstructMe SDK 2.1 Released

Hello everyone! We are proud to announce the availability of ReconstructMe SDK 2.1. The new version brings a lot of new features and fixes many issues we have encountered over the past months. You can get the latest version here.

The major changes are:

Post-process your scan with CSG

We have added a constructive solid geometry (CSG) module that allows you to post-process reconstruction data using boolean operations. These allow you to compute intersections, unions or differences between the volume content and other objects. Currently, we support the following primitive objects:

  • spheres,
  • axis aligned boxes,
  • and planes.

In the screenshots below you can see CSG operations (union, difference, intersection) applied to a box and a sphere.

[Image: CSG operations on a box and a sphere]

All operations are performed on the volume and generate intersection-free meshes when extracted. Additionally, we’ve added support for complex meshes as operands. This allows you to easily add stands or cut your reconstruction. We used this feature extensively to automatically generate printable busts such as the example below.

We will go into more detail on how to automatically generate busts in upcoming blog posts next week. In the meantime, check out the CSG example for usage.
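Under the hood the reconstruction volume stores signed distance values, so CSG maps to simple per-voxel combinations of those values. The following sketch shows the standard min/max formulation (negative values inside); it illustrates the underlying idea and is not the ReconstructMe SDK API.

#include <math.h>

/* Standard CSG on signed distance values (negative = inside the object).
 * Applied per voxel between the reconstruction volume and a primitive
 * (sphere, axis-aligned box, plane), these yield the union, difference
 * and intersection shown in the screenshots above. */
static inline float sdf_union(float a, float b)        { return fminf(a, b); }
static inline float sdf_intersection(float a, float b) { return fmaxf(a, b); }
static inline float sdf_difference(float a, float b)   { return fmaxf(a, -b); }

/* Signed distance from point p to a sphere primitive. */
static inline float sdf_sphere(const float p[3], const float c[3], float r)
{
    float dx = p[0] - c[0], dy = p[1] - c[1], dz = p[2] - c[2];
    return sqrtf(dx * dx + dy * dy + dz * dz) - r;
}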

Use any RGBD sensor

Another feature we focused our development efforts on is allowing users to share a 3D camera resource with ReconstructMe. This means that ReconstructMe no longer forces you to use the sensors that ship with it; you can roll out your own sensor implementation and provide ReconstructMe with only the data it requires for reconstruction (this usually boils down to the sensor intrinsics, a depth map and optionally a color image).

Using an external sensor allows you to work with sensors not yet supported by ReconstructMe. Have a look at the external sensor example for details.
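As a rough illustration of what such an implementation has to deliver per frame (the struct below is hypothetical and is not the SDK's external sensor interface; see the linked example for the real one):

#include <stdint.h>

/* Hypothetical per-frame data of a custom sensor implementation. */
typedef struct {
    float fx, fy;          /* pinhole focal lengths in pixels                        */
    float cx, cy;          /* principal point in pixels                              */
    int   width, height;   /* depth image size                                       */
    const uint16_t *depth; /* depth map, typically in millimeters                    */
    const uint8_t  *rgb;   /* optional color image (3 bytes per pixel), may be NULL  */
} external_frame;

A custom sensor fills such a frame from its own driver each cycle and hands it over to ReconstructMe; everything else (tracking, volume update, meshing) stays unchanged.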

Browse the volume directly

Until now the ReconstructMe SDK did not provide any means of directly navigating through the reconstruction volume content. Users were always forced to explicitly generate a surface from the volume and inspect that surface afterwards. This has now changed thanks to the volume viewer. Using it pretty much boils down to placing the following lines in your code:

reme_viewer_t viewer;

/* Create a viewer showing the volume as seen from the given sensor
   (ctx, volume and sensor are assumed to be created beforehand). */
reme_viewer_create_volume(ctx, volume, sensor, "My volume viewer", &viewer);

/* Update the viewer until its window is closed. */
reme_viewer_wait(ctx, viewer);

180 Amazing 3D Scans from the Long Night of Research 2014

Hello everyone! We had a great Long Night of Research last Friday with more than 300 people visiting us at PROFACTOR. And we scanned most of them! More than 180 people agreed to upload their scans to our site. Here’s a good example of what those scans look like.

Over the weekend we uploaded all models to our site and rolled out our very own web-based 3D viewer. The viewer should work with most modern browsers (except perhaps Internet Explorer). If you see an error message when clicking the play button above, you probably need to update your browser or switch to a different one.

Entire collection

To view all scanned models, click the image below. Note that these are rendered 3D scans, not photographs – no manual post-processing was applied.

[Image: montage of all scans]

Amazing, isn’t it? Tell us about your favorite scan!

Words on post-processing

As promised, we’d like to mention a couple of things for all the people who want to download and post-process their files. First of all, model files can be downloaded through our 3D viewer. For storage reasons, ReconstructMe uses the OpenCTM file format for web downloads. In order to view or post-process a model, you will therefore need a tool that can handle OpenCTM files.

Although there are many tools around, we recommend Meshlab. It runs on all major platforms (Windows, MacOS and Linux), provides conversion between many file formats, and is free to use.

Some people mentioned that they would like to turn their virtual models into 3D printed busts. You can do this by ordering a 3D print from a 3D printing service. Many such services have appeared in recent years; the most famous are Shapeways and Sculpteo.

License

Unless otherwise stated, all 3D model files are licensed under CC BY-NC 4.0. This means you can share or adapt them as long as you give appropriate credit and don’t use the material for commercial purposes.

We’d like to conclude with another great scan.

ReconstructMe and Multicopters (Part 1)

by Gerold Huber

In the course of my master’s thesis in the field of Robotics and Automation at the Johannes Kepler University in Linz (Austria), I have been working at PROFACTOR (the creators of ReconstructMe) since last summer. We wanted to set up a quadcopter for fully autonomous tasks without external markers/sensors/cameras. The main challenge in this setup is estimating and tracking the position accurately. After some struggling with a 2D flow sensor for position estimation, we decided to use a depth sensor in combination with the ReconstructMe SDK for position tracking.

To this end, I implemented a mobile version of ReconstructMe. As you can see in the picture of the system architecture below, a high-level board streams the sensor data via WiFi to a ground station running ReconstructMe, where it is used to build a global model from the sensor data. As a benefit, ReconstructMe provides complete camera tracking information back to the high-level board. The flight management unit (a microcontroller) then uses this raw position estimate from ReconstructMe to stabilize the multicopter.

[Image: multicopter system overview]

The entire process chain (sensor – data streaming – reconstruction – tracking streaming) takes about 120 ms. It turns out that this mobile version of ReconstructMe is sufficient for control and stabilization of a quadcopter, as can be seen in the video below. Note that all visible cables are for power supply purposes only.
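To give a rough idea of the feedback path (the message layout below is made up for illustration; it is not the format used in the project), the ground station only needs to return a compact pose estimate to the flight management unit for every reconstructed frame:

#include <stdint.h>

/* Hypothetical pose feedback message, ground station -> flight management
 * unit. At sensor frame rate this is on the order of a kilobyte per second,
 * so the WiFi link is dominated by the depth stream in the other direction. */
typedef struct {
    uint32_t frame_id;       /* frame this pose refers to                   */
    uint32_t timestamp_ms;   /* capture time, for latency compensation      */
    float    position[3];    /* camera position in the world frame (meters) */
    float    orientation[4]; /* camera orientation as a unit quaternion     */
    uint8_t  tracking_ok;    /* 0 if tracking was lost for this frame       */
} pose_message;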

Hardware used:

Software used:

Extending ReconstructMe with Measuring Capabilities

by Sabine Winkler

We are a group of students from the Upper Austria University of Applied Sciences in Hagenberg (Software Engineering) and have teamed up with PROFACTOR, the creators of ReconstructMe. For the last two semesters we have been working hard to create a tool that calculates the perimeter of an object as well as the distance between two points on the surface of a model.

The image below shows a slice through a human head bust and the resulting paths. All measurements are evaluated in real time.

Measuring using a slicing plane. All paths with their corresponding perimeter are evaluated separately.

As you can see, the tool supports the calculation of multiple perimeters when the model is cut with a plane aligned to two of the axes. It recognizes these perimeters completely independently of each other and highlights them in different colors. The tool loads models from your file system, currently supporting the PLY and STL file formats, either by selecting a file or by passing a command-line argument.
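A perimeter of this kind can be measured by intersecting every triangle with the slicing plane and summing the lengths of the resulting segments. The sketch below is illustrative only (it is not the tool's implementation) and omits the grouping of segments into separate closed paths, which the tool performs for the per-path highlighting:

#include <math.h>

/* Length of the intersection segment between one triangle (a, b, c) and the
 * axis-aligned plane coord[axis] == level; 0 if they do not intersect in a
 * segment. Summing over all triangles gives the combined perimeter of all
 * slice paths. Degenerate cases (vertices exactly on the plane) are ignored. */
static float triangle_slice_length(const float a[3], const float b[3],
                                   const float c[3], int axis, float level)
{
    const float *v[3] = { a, b, c };
    float pts[2][3];
    int n = 0;

    for (int e = 0; e < 3 && n < 2; ++e) {
        const float *p = v[e], *q = v[(e + 1) % 3];
        float dp = p[axis] - level, dq = q[axis] - level;
        if ((dp < 0.0f) != (dq < 0.0f)) {  /* this edge crosses the plane */
            float t = dp / (dp - dq);
            for (int k = 0; k < 3; ++k)
                pts[n][k] = p[k] + t * (q[k] - p[k]);
            ++n;
        }
    }
    if (n < 2)
        return 0.0f;

    float dx = pts[1][0] - pts[0][0];
    float dy = pts[1][1] - pts[0][1];
    float dz = pts[1][2] - pts[0][2];
    return sqrtf(dx * dx + dy * dy + dz * dz);
}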

Additionally, you can place two points anywhere on the surface of the loaded model to calculate the geodesic distance (the distance along the surface) between them.

Geodesic distance between two selected points and the corresponding path.

This can be especially handy, e.g. for orthopaedic specialists or anyone who wants to take certain measurements from ReconstructMe scans. We would like to thank PROFACTOR for letting us be part of the development process of new features and for their support during this time.

A note from PROFACTOR
We are currently evaluating scenarios for adding this application, or parts of its functionality, to ReconstructMe and/or the ReconstructMe SDK. If you have a need for such a feature, we would love to hear about your requirements, so we can design the functionality to fit them best.

ReconstructMe at Long Night of Research 2014

The ReconstructMe team will once again take part in Austria’s Long Night of Research. If you are interested in experiencing our 3D photo booth or just want to chat with us, join us on the 4th of April, from 5 to 11 pm, at one of the following locations.

PROFACTOR GmbH
Im Stadtgut A2
4407 Steyr-Gleink
Get directions

FH Technikum Wien
Höchstädtplatz 6
1200 Wien
Get directions

2012 was the first time we showed ReconstructMe at the Long Night of Research in Vienna and Steyr and it was such a great experience:

We had around 300 people visiting and exploring our robotics, nano-tech and chemistry labs. At the ReconstructMe booth we had a lot of fun reconstructing our visitors and enjoyed the in-depth discussions with potential users and developers. What really surprised us was how quickly people learnt to use this new technology to scan their family members. – Christoph Heindl

We are looking forward to meeting you there!

What’s new?

As you may have noticed, we’ve reworked our homepage. We’ve re-organized most of its content into a hopefully clean and easy-to-read form.

Help center
A new help center, which you can find by clicking Support in the main menu, allows you to browse and search our knowledge base. We intend to extend the knowledge base massively over the coming weeks.

Forums
In addition, we decided to move away from our Google Groups channels and instead integrate forums directly into our homepage. This allows anyone (even without registration) to converse with other users and the ReconstructMe team.

Product variants
We stripped down our product variants to basically two: a single seat license for users and one for developers. All other inquiries, such as volume and educational licenses, are handled in direct interaction with our sales team.

Technical News
Besides reworking our homepage, we’ve been busy with improving ReconstructMe in specific areas. In the coming weeks we intend to blog about our latest changes, including (but not limited to) new sensors, large scale reconstruction and drones using ReconstructMe for stabilization and reconstruction.

ReconstructMe & 3D Artist Magazine

3D Artist is an authoritative and well-respected source of inspiration for those working, or aspiring to work, in the CGI industry. Every issue in the 3D Artist portfolio features guides to awe-inspiring images, interviews and career advice from industry insiders, and behind-the-scenes access to major 3D projects. 3D Artist covers all software and disciplines.

For this reason we are glad that the magazine also chose ReconstructMe for its latest article. Grab your issue here!

Here is a short preview!

[Image: preview from the 3D Artist article]

ReconstructMe and IPO.Face

On Batch Scanning People

By Martin Lang, IPO.Plan GmbH

Normally our company focuses on unique planning tools, interactive process visualisations and specialised services for factory and production planning. We also use commodity depth sensors mounted on robots to help us gather valuable 3D data from the factory floor. Utilising the invaluable ReconstructMe SDK, we are able to directly transfer parts of real production environments into our planning software.

Once in a while somebody needs some distraction. So we thought: Hey, let’s scan people for a change!


IPO.Face

Building on our know-how in robotics and the ReconstructMe SDK, we planned, constructed, and implemented an upper-body people scanner in less than 40 days.

The basic concept is to quickly move a Primesense 1.09 sensor in a predefined pattern around a person’s head while the reconstruction algorithm handles tracking, reconstruction and surface generation. A post-processing step applies some plane cuts and adds a premodelled base to the mesh, yielding a 3D printable bust.

The mechanical structure of IPO.Face is built from standard off-the-shelf items, with special parts laser-cut from aluminium or 3D printed (ABS). Servo motors rotate a circular guide and move an attached sledge, carrying the mounted Primesense, along that guide. Motion control and the ReconstructMe SDK are handled by a single piece of software. Thus, with the help of accurate positional sensors, we are able to recover the reconstruction pose in real time if sensor tracking is lost.
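A sketch of this recovery idea (all names below are hypothetical; the real system wires this through the motion controller and the ReconstructMe SDK's tracking interface): whenever image-based tracking reports a loss, the pose known from the servo encoders is used to re-seed it.

/* Hypothetical glue code for illustration only. */
typedef struct { float pose[16]; } rig_pose;  /* 4x4 sensor pose from the motion controller */

extern int      tracking_update(const void *depth_frame); /* 0 on tracking loss (hypothetical)               */
extern rig_pose encoder_pose(void);                       /* pose derived from servo encoders (hypothetical) */
extern void     tracking_set_pose(const rig_pose *p);     /* re-seed tracking with a known pose (hypothetical) */

static void process_frame(const void *depth_frame)
{
    if (!tracking_update(depth_frame)) {
        /* Image-based tracking failed, e.g. on featureless regions. Because
         * the sensor rides on a rigid, encoder-tracked sledge, its true pose
         * is known and can be fed back immediately. */
        rig_pose p = encoder_pose();
        tracking_set_pose(&p);
    }
}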

After trials we settled on a 20-second movement pattern, which gave us good results for most facial features, including chins and complex haircuts. During an in-house exhibition we scanned more than 70 people. They all received their digital 3D models and, as an option bound to a charitable donation, were later provided with a 3D printed bust as well.

[Image: 3D printed busts]

ReconstructMe makes a Wedding Cake Topper

by Steve Dey

It was October 2013. I finally had my 360-degree scanning turntable, based on Fredini’s excellent design and driven by a high-torque 3 rpm rotisserie motor. I had a Primesense Carmine 1.09 close-range scanner which, after months of disappointing results, was working perfectly thanks to a driver upgrade. I had a new PC in which lurked a beast of a graphics card – the NVIDIA GeForce GTX Titan (I do a lot of work with video and 3D graphics). I had the latest version of ReconstructMe, 2.0.199, which I was convinced would deliver the most accurate models and the quickest workflow. To print the models I had a home-built original Ultimaker.

[Image: wedding cake topper]

My 3D scan and print studio was open for business! So who to scan first? My young children would not stand on my turntable for more than 10 seconds; they were justifiably wary of being guinea pigs for tests and trials. My wife, though supportive, was not ready to see a mini version of herself. In my excitement to get the technology working I had not considered the implications of being confronted with a 360-degree model of oneself. As it turned out, most men are not in the slightest bit bothered by the idea, other than the odd query about 3D airbrushing a six-pack. Kids universally loved the idea of a mini version of themselves. Women seemed to fall into two camps: either “scan me all day, I love it” or “do not even ask me to step on that turntable”.

After a dozen or so scans and prints I had worked out the workflow and ironed out most of the issues. I sometimes ran into memory issues, as the close-range scanner seemed to build huge models which would occasionally not fully export from ReconstructMe. I found that by keeping the scan volume large enough and the scanner at the right distance I could avoid this problem. I believe the ReconstructMe team is investigating this, but I am getting good enough results for now. Another problem I had was with the security software BitDefender: after installing it, my PrimeSense and my backup Kinect scanner failed to work, and not just with ReconstructMe, so I switched to another security solution and all was fine.

Then one day a wedding invitation arrived from our good friends Carl and Emily, and after a discussion about the arrangements someone came up with the idea of a 3D scanned and printed wedding cake topper.

[Images: wedding cake toppers]

At first I tried scanning the couple together (the turntable is large enough to do this), but I soon realized that there were too many scanning blind spots. So we scanned them individually and the results were good. Using Blender I cleaned up any defects and reunited the happy couple on a platform, but then how should they be oriented: face-to-face or side-by-side? I used Sketchfab to share the models online, but the happy couple still could not decide, so we printed a few versions.

On the wedding day the cake attracted a lot of attention; mobile phones and cameras were trained on it and lots of questions were asked. Emily and Carl were pleased to have something unique at their wedding, and for most of their guests this was their first exposure to 3D scanning and printing. The attention continued after the wedding day: I was interviewed by local newspapers, then appeared in a national newspaper, and was later on a local radio station explaining how 3D printing and scanning work. Unsurprisingly, I am now getting requests for new models, but setting a price for them has proved difficult given the effort involved. I think plastic has an association with being cheap, despite the fact that these models are unique, so I am investigating colour 3D printing to see if people would value that more.

for Emily and Carl