Tech Preview – Surface Texturing

Thanks to everyone who contributed to our feature survey on Google+. We have adapted our internal roadmap based on its results and will head for texture support and an easy-to-use graphical user interface next.

Even though it’s still early days, we’d like to share what can be accomplished with our texturing engine. We are aiming at a semi-automatic solution in which users attach textures in a post-processing step. This allows us to use arbitrary cameras for texturing the mesh. Our engine supports multiple textures and blends them smoothly.
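In principle, projective texturing of this kind boils down to mapping each mesh vertex through the calibrated camera that took the photograph; the resulting pixel coordinates become that vertex’s texture lookup. A minimal sketch in Python/NumPy of this idea (illustrative names only, not the ReconstructMe API):

```python
import numpy as np

def project_vertex(X, K, R, t):
    """Project a 3D point X (world frame) into the image plane of a
    camera with intrinsics K, rotation R and translation t."""
    Xc = R @ X + t                   # world -> camera coordinates
    u, v, w = K @ Xc                 # apply intrinsics
    return np.array([u / w, v / w])  # perspective divide -> pixel coords

# Example camera: identity pose, focal length 500, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

# A vertex straight ahead of the camera projects to the principal point.
uv = project_vertex(np.array([0.0, 0.0, 1.0]), K, R, t)
```

The normalized pixel coordinates `uv` can then be used directly to sample the photograph when rasterizing the mesh.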

The first image below shows the plain surface reconstructed in high-resolution mode. Next to it is an image taken with a Casio Exilim ZR 100 to be used as the texture. On the right, the final textured mesh is shown.

The best part: it took less than 3 minutes from scanning to texturing for the above scenario. We will probably release this feature alongside the first version of the UI around late summer.

Enjoy!

14 thoughts on “Tech Preview – Surface Texturing”

  1. vallurusuresh

Hello sir, I am not able to load the software after completing all the installation steps. Where am I going wrong? Please help me.

  2. Michael

Interesting. Will the texture align with the mesh just as it would have if you had used its internal (but low-res) RGB camera? Is there manual alignment involved?

    1. Christoph Heindl Post author

There was a manual alignment step involved: selecting correspondences on the mesh and on the image (4 pairs) in order to estimate the camera position relative to the 3D mesh. When using the internal RGB camera this should not be necessary (extrinsic calibration).
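The alignment step described here is a classic camera-resectioning problem: with known intrinsics, four 2D–3D pairs are enough (a PnP problem). As a rough illustration of the principle only, the sketch below uses the simpler direct linear transform instead, which estimates the full 3×4 projection matrix and needs at least six correspondences; all names are hypothetical, not the engine’s actual code:

```python
import numpy as np

def estimate_projection(X, x):
    """Estimate a 3x4 projection matrix P (up to scale) from n >= 6
    correspondences between 3D points X (n,3) and 2D points x (n,2)
    using the direct linear transform (DLT)."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        p = [Xw, Yw, Zw, 1.0]
        A.append([0.0] * 4 + [-c for c in p] + [v * c for c in p])
        A.append(p + [0.0] * 4 + [-u * c for c in p])
    # The solution is the right singular vector of A with smallest
    # singular value (the null vector of A for exact data).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

# Synthetic check: project random points with a known camera,
# recover it from the correspondences, and reproject.
rng = np.random.default_rng(0)
P_true = np.array([[500.0,   0.0, 320.0, 10.0],
                   [  0.0, 500.0, 240.0, -5.0],
                   [  0.0,   0.0,   1.0,  4.0]])
X = rng.uniform(-1, 1, size=(8, 3))
Xh = np.hstack([X, np.ones((8, 1))])
proj = (P_true @ Xh.T).T
x = proj[:, :2] / proj[:, 2:3]

P = estimate_projection(X, x)
proj2 = (P @ Xh.T).T
x2 = proj2[:, :2] / proj2[:, 2:3]  # matches x up to numerical precision
```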

  3. mark

Well, what a great start :-)
Remember, when combining pictures from different viewpoints, to weight the solution towards faces whose normals are more aligned with the camera.
You can also attempt to remove the lighting to obtain unlit (albedo) textures before projecting, or at least use it as a weighting mechanism when combining pixels from multiple textures.
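Mark’s weighting idea can be sketched as follows: for each face, weight a camera’s pixel by the cosine between the face normal and the direction towards that camera, then take the weighted average. A hypothetical sketch (not the engine’s actual code):

```python
import numpy as np

def view_weight(normal, face_center, cam_pos):
    """Weight a face's contribution by how directly the camera sees it:
    the cosine between the face normal (unit length) and the direction
    to the camera. Faces seen edge-on or from behind get weight 0."""
    to_cam = cam_pos - face_center
    to_cam = to_cam / np.linalg.norm(to_cam)
    return max(0.0, float(np.dot(normal, to_cam)))

def blend(colors, weights):
    """Weighted average of per-camera color samples for one face."""
    w = np.asarray(weights, dtype=float)
    return (np.asarray(colors, dtype=float) * w[:, None]).sum(0) / w.sum()

# A face viewed head-on by camera A and at a grazing angle by camera B:
normal = np.array([0.0, 0.0, 1.0])
center = np.zeros(3)
w_a = view_weight(normal, center, np.array([0.0, 0.0, 2.0]))  # head-on
w_b = view_weight(normal, center, np.array([2.0, 0.0, 0.1]))  # grazing

# The blended color is dominated by the head-on view.
color = blend([[255, 0, 0], [0, 0, 255]], [w_a, w_b])
```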

    1. Christoph Heindl Post author

Thanks Mark! We know well that lighting and perceived color is a beast of its own. We are thankful for every input we can get. If you know of must-read references, please send them in (info@reconstructme.net).

      Thanks

  4. charles

    Fantastic! Looking forward to this new feature with eager anticipation. While it took 3 minutes on your super high powered PC, it may take a little longer for those with more humble computers?

I don’t know too much about surface reconstruction, but could you use photogrammetry to build a second mesh and use ICP to align the two? The photogrammetric model can then be thrown away and only the computed camera positions used to texture the mesh. Just a (wacky) idea.
