Thanks to all who contributed to our feature survey on Google+. We adapted our internal roadmap based on its results and will head for texture support and an easy-to-use graphical user interface next.
Even though it’s still early days, we’d like to share what can already be accomplished with our texturing engine. We are aiming at a semi-automatic solution in which users attach textures in a post-processing step. This allows us to use arbitrary cameras for texturing the mesh. Our engine supports multiple textures and blends them smoothly.
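To give a rough idea of what attaching a texture in a post-processing step means, here is a minimal numpy sketch (our own illustration, not the engine’s actual code): once a camera pose is known, every mesh vertex can be projected into the photograph to obtain UV texture coordinates.

```python
import numpy as np

def project_vertices_to_uv(vertices, K, R, t, image_size):
    """Project mesh vertices into a photo to obtain UV texture coordinates.

    vertices   : (N, 3) mesh vertices in world coordinates
    K          : (3, 3) camera intrinsic matrix
    R, t       : world-to-camera rotation (3, 3) and translation (3,)
    image_size : (width, height) of the photograph in pixels
    """
    cam = vertices @ R.T + t            # vertices in the camera frame
    pix = cam @ K.T                     # homogeneous pixel coordinates
    pix = pix[:, :2] / pix[:, 2:3]      # perspective divide
    w, h = image_size
    # Normalize to [0, 1] UVs; flip v because the image origin is top-left.
    return np.column_stack((pix[:, 0] / w, 1.0 - pix[:, 1] / h))
```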
The first image below shows the plain surface reconstructed in high-resolution mode. Next to it is an image taken with a Casio Exilim ZR 100 to be used as a texture. On the right, the final textured mesh is shown.
The best part: it took less than 3 minutes from scanning to texturing for the scenario above. We will probably release this feature alongside the first version of the UI around late summer.
Enjoy!
Hello sir, I am not able to load the software after completing all the installation steps. Where am I going wrong? Please help me.
Go to the forum: https://groups.google.com/forum/?fromgroups#!forum/reconstructme
Interesting. Will the texture align with the mesh just as it would have if you had used its internal (but low-res) RGB camera? Is there manual alignment involved?
There was a manual alignment step involved: selecting correspondences on the mesh and on the image (4 pairs) in order to estimate the camera position relative to the 3D mesh. When using the internal RGB camera this should not be necessary (extrinsic calibration).
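For the curious: estimating the camera from such 2D-3D pairs is the classic perspective-n-point (PnP) problem. A minimal sketch using OpenCV’s solvePnP, with made-up correspondences and intrinsics (four coplanar mesh points and their pixel positions in the photo):

```python
import numpy as np
import cv2

# Hypothetical correspondences: four coplanar points picked on the mesh
# (world coordinates) and the matching points picked in the photograph (pixels).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.2, 0.0, 0.0],
                          [0.2, 0.2, 0.0],
                          [0.0, 0.2, 0.0]], dtype=np.float64)
image_points = np.array([[320.0, 240.0],
                         [480.0, 245.0],
                         [475.0, 400.0],
                         [318.0, 395.0]], dtype=np.float64)

# Assumed intrinsics: focal length and principal point in pixels, no distortion.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)

# Estimate the camera pose relative to the mesh from the four pairs.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
print("camera rotation:\n", R)
print("camera translation:", tvec.ravel())
```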
Congrats YOU ROCK!!!!
Thank you for listening to us!
You’re welcome!
Well, what a great start :-)
Remember, when combining pictures from different viewpoints, to weight the solution towards faces whose normals are more aligned with the camera.
You can also attempt to remove the lighting to recover unlit albedo textures before projecting, or at least use lighting as a weighting mechanism when combining pixels from multiple textures.
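For illustration, here is a minimal numpy sketch of such view-dependent weighting; the function names and the simple weighted average are our own, not ReconstructMe’s:

```python
import numpy as np

def view_alignment_weights(face_normals, face_centers, camera_center):
    """Per-face weights favouring faces whose normals point at the camera."""
    to_cam = camera_center - face_centers        # vectors towards the camera
    to_cam /= np.linalg.norm(to_cam, axis=1, keepdims=True)
    # cos(angle) between unit face normal and view direction; back-facing -> 0.
    return np.clip(np.einsum('ij,ij->i', face_normals, to_cam), 0.0, None)

def blend_face_colors(colors_per_view, weights_per_view):
    """Weighted average of per-face colors sampled from several photographs.

    colors_per_view  : (V, F, 3) colors of F faces sampled from V views
    weights_per_view : (V, F) weights from view_alignment_weights
    """
    w = weights_per_view[..., None]
    return (colors_per_view * w).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-8)
```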
Thanks Mark! We know well that lighting and perceived color is a beast of its own. We are thankful for every input we can get. If you have must-read references, please send them in (info@reconstructme.net).
Thanks
Fantastic! Looking forward to this new feature with eager anticipation. While it took 3 minutes on your high-powered PC, might it take a little longer for those of us with more humble computers?
I don’t know too much about surface reconstruction, but could you use photogrammetry to build a second mesh and use ICP to align the two? The photogrammetric model can be thrown away and only the computed camera positions used to texture the mesh. Just a (wacky) idea.
Charles, could you elaborate on your idea? I couldn’t follow it.
I’ll put a new post on the Google Group.
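In the meantime, a rough sketch of the alignment half of the idea, assuming the Open3D library and placeholder file names: register the (disposable) photogrammetric model to the scan with ICP, then carry the recovered camera poses over with the same transform.

```python
import numpy as np
import open3d as o3d

# Placeholder files: the ReconstructMe scan and the throwaway photogrammetric
# model, both loaded as point clouds.
scan = o3d.io.read_point_cloud("scan.ply")
photo_model = o3d.io.read_point_cloud("photogrammetry.ply")

# Rigidly align the photogrammetric model to the scan with point-to-point ICP.
result = o3d.pipelines.registration.registration_icp(
    photo_model, scan, max_correspondence_distance=0.02,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
T = result.transformation  # 4x4: photogrammetry frame -> scan frame

# Carry a camera pose recovered by photogrammetry (4x4 camera-to-world matrix)
# into the scan's frame, so the photo can be projected onto the scanned mesh.
camera_to_world = np.eye(4)  # hypothetical pose from the photogrammetry step
print(T @ camera_to_world)
```

One caveat: plain rigid ICP assumes both models share the same scale, which photogrammetry alone does not guarantee, so a scale estimation step might be needed first.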
Will this ever be possible as an automatic process using the Kinect’s color camera?
Probably yes, but it is not our primary focus.
Will the texture export with the .obj option, or will texture export be supported for .ply only?