Author Archives: Martin Ankerl

About Martin Ankerl

I am a software engineer and love to develop cool algorithms. At PROFACTOR I work on multiple research projects, mostly related to vision and robotics. One major example is IRobFeeder, an automatic bin-picking system, where I work mostly on the object recognition algorithms. I also have a private blog.

Add+it 2015: Symposium of Additive Manufacturing and innovative Technologies

Take the opportunity to discuss with numerous international experts from 10 countries and 3 continents, from both science and industry, what 3D printing technologies offer today and what they can be expected to offer in the future.


What is special about Add+it 2015?

Workshops provide the opportunity to interact with participants and experts, discuss relevant 3D printing issues, and initiate further business cooperation.

Add+it 2015 is organized by PROFACTOR and IPPE, the Institute of Polymer Product Engineering at the Johannes Kepler University Linz.

Registration and Fees

  • Further information on venue, programme and registration is available on the conference website.
  • The registration form should preferably be completed online. Deadline for registration: August 20, 2015

Early registration with a discount of € 30,- has been extended until August 10, 2015!

ReconstructMe with Glasses

Inspired by ideas from MagWeb, Tony Buser has done an awesome glasses mod with the Kinect and ReconstructMe. He got +2.5 reading glasses and mounted them in front of the Kinect. It is a bit difficult to get a complete scan with these, as the image is distorted, but the result looks really excellent:


The mod looks really cool; I am amazed that this actually works… It definitely shows that adding a lens to the Kinect might be a good way to scan small parts. The difficult part will be to provide a calibration method that can successfully undistort the data.
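One plausible approach would be to calibrate the Kinect-plus-lens combination like any other camera: estimate its intrinsics and distortion with a checkerboard, then undistort every depth frame before it goes into the reconstruction. Here is a minimal sketch of that last step with OpenCV; the focal length and principal point are taken from the settings below, while the distortion coefficients are made-up placeholders that would have to come from a real calibration of your own setup:

import cv2
import numpy as np

# Intrinsics of the Kinect-plus-lens combination (fx, fy, cx, cy taken from
# the settings below); the distortion coefficients are placeholders.
K = np.array([[514.16,   0.0, 320.0],
              [  0.0, 514.16, 240.0],
              [  0.0,    0.0,   1.0]])
dist = np.array([0.1, -0.2, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3 (made up)

depth = cv2.imread('depth_frame.png', cv2.IMREAD_UNCHANGED)

# Nearest-neighbour remapping avoids blending depth values across edges.
map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, K, (640, 480), cv2.CV_32FC1)
undistorted = cv2.remap(depth, map1, map2, cv2.INTER_NEAREST)
cv2.imwrite('depth_undistorted.png', undistorted)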

Here are the settings Tony Buser used for this scan:

camera_size_x: 640
camera_size_y: 480
camera_fx: 514.16
camera_fy: 514.16
camera_px: 320
camera_py: 240
camera_near: 100
camera_far: 2000
volume_size: 512
volume_min {
  x: -250
  y: -250
  z: 400
}
volume_max {
  x: 250
  y: 250
  z: 900
}
integrate_truncation: 10
integrate_max_weight: 64
icp_max_iter: 20
icp_max_dist2: 200
icp_min_cos_angle: 0.9
smooth_normals: false
disable_optimizations: false
extract_step_fact: 0.5
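One reason a lens helps with small parts is how the reconstruction volume relates to detail: the settings above define a 500 × 500 × 500 mm box (x and y from -250 to 250 mm, z from 400 to 900 mm in front of the camera). Assuming volume_size means the number of voxels per axis, a quick back-of-the-envelope calculation shows the resulting voxel size, and what a smaller scan volume would buy (the 150 mm figure is just an example):

# Voxel size implied by the settings above.
# Assumption: volume_size (512) is the number of voxels per axis.
volume_mm = 900.0 - 400.0          # z extent; x and y (-250..250 mm) are also 500 mm
voxels_per_axis = 512

print('voxel edge: %.2f mm' % (volume_mm / voxels_per_axis))        # ~0.98 mm
print('with a 150 mm volume: %.2f mm' % (150.0 / voxels_per_axis))  # ~0.29 mm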

Character Creation with a ReconstructMe Scan

We created a full body scan of one of our coworkers using a bigger volume, and he used it as the basis for a character animation.

  1. The most difficult part of the scanning process was standing still and not moving the arms. We solved this problem by letting the model hold a broomstick in each hand :) This data was later removed from the CAD scan (see the sketch after this list).
  2. To animate the skin of the character, a biped system from 3ds Max was used.
  3. Finally, several BIP files were loaded with the Motion Mixer from 3ds Max to drive the biped motion.
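The post does not say which tool was used to cut the broomsticks out of the mesh; any mesh editor will do. As a rough illustration only, here is how unwanted triangles could be stripped from an exported STL with the numpy-stl package, assuming the broomsticks lie outside a known bounding box around the body (file names and coordinates are made up):

import numpy as np
from stl import mesh  # numpy-stl package

scan = mesh.Mesh.from_file('body_scan.stl')

# Triangle centroids, in the same units as the scan (millimetres here).
centroids = scan.vectors.mean(axis=1)

# Keep only triangles whose centroid lies inside a box around the body;
# the limits below are placeholders, not the values used for this model.
keep = (np.abs(centroids[:, 0]) < 300) & (np.abs(centroids[:, 1]) < 300)

data = np.zeros(keep.sum(), dtype=mesh.Mesh.dtype)
data['vectors'] = scan.vectors[keep]
mesh.Mesh(data).save('body_scan_cleaned.stl')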

The result is quite stunning! Here is a video:

Here are some more screenshots:

Public Release on February 27th

Start your engines, connect your Kinects, oil your swivel chairs: The ReconstructMe team, powered by PROFACTOR, proudly announces:

ReconstructMe will be publicly released on
February 27th, 14:00 CET

On Monday everybody, and that probably includes you, will be able to scan and reconstruct the world. ReconstructMe will be free for non-commercial use. Contact us if you are interested in commercial use.

Big thanks fly out to our BETA testers who made releasing on time possible! They provided valuable feedback throughout the entire BETA program, and without them we wouldn't have reached the robustness and usability we have now.

Here’s to our Beta testers!

Two More 360° Scans

Continuing the scans of the ReconstructMe team, here are Christoph Kopf and Matthias Plasch. Feel free to print away as much as you like ;-) If you do so, please send us pictures!


Christoph Kopf


Matthias Plasch

Three 360 Degree Upper Body CAD Models Reconstructed

We have just recorded three of our colleagues and created STL models from them. This time we made full models of ourselves by rotating in front of the camera: one person sat on a chair and rotated, while the other moved the Kinect up and down, so we could capture the front, the back, and also the top.

If you own a 3D printer or 3D printing software, we would very much like to know whether these models are good enough for 3D printing! Please post any comments here. For everyone who made it into the BETA program, the data stream is also available, so you can create the STL models yourself. We have post-processed the STLs with Meshlab by re-normalising the normals and converting them to binary STL.
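If you would rather script that post-processing step than click through Meshlab, the same two operations (recomputing the facet normals and writing a binary STL) can also be done with the numpy-stl package. A small sketch, with made-up file names:

import stl
from stl import mesh

scan = mesh.Mesh.from_file('upper_body.stl')

# Recompute the per-facet normals from the triangle vertices.
scan.update_normals()

# Write a binary STL, which is much smaller than the ASCII variant.
scan.save('upper_body_binary.stl', mode=stl.Mode.BINARY)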

We Scanned a Chair in Two Minutes

We took some time (about two minutes or so) to create a CAD model of a car seat. Below you can download the original mesh in high resolution, and a reduced one. The colorization shows the normals.
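The post does not say how the reduced version was produced; one way to get a comparable low-resolution mesh would be quadric edge collapse, for example with Open3D (the library choice, file names, and target count are ours, not necessarily what was used here):

import open3d as o3d

chair = o3d.io.read_triangle_mesh('chair_original.stl')

# STL stores every triangle with its own vertices, so merge duplicates
# first to get a connected mesh that can be simplified.
chair.remove_duplicated_vertices()
chair.remove_degenerate_triangles()

# Collapse edges until roughly the face count of the reduced download remains.
reduced = chair.simplify_quadric_decimation(target_number_of_triangles=52394)
reduced.compute_triangle_normals()

o3d.io.write_triangle_mesh('chair_reduced.stl', reduced)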

Downloads as STL models:

{filelink=4}
Original resolution: 533,317 vertices and 1,047,917 faces, 20.8MB

{filelink=3}
Low resolution: 27,683 vertices and 52,394 faces, 1.15MB

chair by Martin Ankerl is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Room, Scanned with ReconstructMe in a few Seconds

We made a quick scan of our room by moving the Kinect around, and extracted the mesh that was created in realtime as an STL. Here is what it looks like:

The original mesh is quite large: it has 578,732 vertices and 1,110,366 faces. All of this was created in realtime.

You can download the generated STL files here, and we also have a reduced version to save your bandwidth:

{filelink=1}
Large, unfiltered version with 578,732 vertices and 1,110,366 faces. 21.6 MB

{filelink=2}
Reduced version with 31,033 vertices and 55,517 faces. 1.21 MB

Have fun :)