We continue to release after each development sprint, so here is our new release with three new features:
- Added individual resolutions per dimension.
- Automatic correction of the integration truncation when the configured value is too low.
- Added the gradient_step_fact parameter to control the smoothness of neighboring normals.
Adjusting resolutions per dimension allows you to add more detail to dimensions that require it (think of relief scans or similar non-square volumes). To adjust the settings, look at the packaged configuration files that ship with ReconstructMe. Backward compatibility with older configuration files is broken, so please update them accordingly.
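For example, a flat relief scan can use a fine resolution in x and y and a coarser one in z. A minimal sketch of the relevant volume fields, taken from the configuration shown further below:

volume_size { x: 1024 y: 1024 z: 128 }
volume_min { x: -250 y: -250 z: 400 }
volume_max { x: 250 y: 250 z: 600 }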
The second feature detects misconfigurations of the integrate_truncation parameter. This parameter depends on the volume size and resolution. If a value below the minimum is detected, the truncation is clamped to the minimum value.
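As an illustration of the clamping behavior, here is a minimal sketch. The exact minimum ReconstructMe derives from the volume is not spelled out here; the sketch assumes it is a small multiple of the largest voxel edge length, and both the factor of 2 and the function name are hypothetical.

def clamp_integrate_truncation(truncation, volume_min, volume_max, volume_size, factor=2.0):
    """Clamp integrate_truncation to a voxel-size dependent minimum.

    volume_min/volume_max give the volume extents in mm per axis,
    volume_size gives the voxel resolution per axis.
    `factor` is a hypothetical safety multiple, not a documented constant.
    """
    voxel_edges = [
        (volume_max[axis] - volume_min[axis]) / volume_size[axis]
        for axis in ("x", "y", "z")
    ]
    minimum = factor * max(voxel_edges)
    return max(truncation, minimum)

# Example with the volume from the configuration below:
# clamp_integrate_truncation(5,
#     {"x": -250, "y": -250, "z": 400},
#     {"x": 250, "y": 250, "z": 600},
#     {"x": 1024, "y": 1024, "z": 128})
# The largest voxel edge is 200 mm / 128 ≈ 1.56 mm, so the assumed minimum is
# ≈ 3.1 mm and integrate_truncation: 5 passes unchanged.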
Below is a screenshot of a high-resolution keyboard and hand scan.
Here’s the configuration:
camera_size_x: 640
camera_size_y: 480
camera_fx: 571.26
camera_fy: 571.26
camera_px: 320
camera_py: 240
camera_near: 100
camera_far: 2000
volume_size { x: 1024 y: 1024 z: 128 }
volume_min { x: -250 y: -250 z: 400 }
volume_max { x: 250 y: 250 z: 600 }
integrate_truncation: 5
integrate_max_weight: 64
icp_max_iter: 20
icp_max_dist2: 200
icp_min_cos_angle: 0.9
smooth_normals: false
disable_optimizations: false
extract_step_fact: 0.5
gradient_step_fact: 0.5
Really, really awesome Kinect app.
Thanks!
Hi,
Here is my result of ReconstructMe vs. ZBrush:
http://www.zbrushcentral.com/showthread.php?164848-Skaale-sketchbook-2012&p=944234#post944234
What is the difference between Kinect for Xbox and Kinect for Windows?
Very interesting application. Is there the possibility of replacing the Kinect depth sensor with a more powerful one (maybe a laser) to scan greater distances? Another question: could ReconstructMe be interfaced with a library of 3D models to enable recognition of real objects?
To answer your first question: this is not directly supported, but may be implemented via the replay functionality.
Second question: yes, we use PROFACTOR object recognition to detect objects in the live data stream. See the video on our front page.
Best,
Christoph
Good. Another thing (sorry): have you ever heard of the Raspberry Pi? Link: http://www.element14.com/community/groups/raspberry-pi
I strongly suggest developing a port of ReconstructMe for this board, hoping its specs can manage it! :)
I know this device and I can imagine it serving as a slim frontend for a mobile solution, streaming data to a background server that does the actual reconstruction. The Raspberry Pi itself is not equipped for real-time reconstruction.