3D reconstruction of objects is a subject of continuing interest in computer vision. A method for 3D object reconstruction was proposed that fuses a set of depth maps captured by a Kinect sensor. To aggregate the contributions of the depth images at their corresponding scales, the depth images were first triangulated in image space; the triangulated depth images were then inserted into a hierarchical signed distance field; a global Delaunay tetrahedralization was applied to all voxel positions, yielding a convex hull covering every voxel; finally, the marching tetrahedra algorithm was applied to the resulting tetrahedral mesh to extract the surface. Experimental results show that the method can fuse 37 depth images captured by the Kinect sensor from different directions at a resolution of 640×480 into a high-quality surface in only 48 s.
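The pipeline lends itself to a short illustration. The sketch below covers only the last two stages of the method, a global Delaunay tetrahedralization of the voxel positions followed by marching-tetrahedra surface extraction. The hierarchical signed distance field fused from triangulated Kinect depth maps is replaced here by an analytic sphere SDF sampled on a flat voxel grid, scipy.spatial.Delaunay stands in for the tetrahedralization, and the simplified case handling does not guarantee consistent triangle winding. All names and parameters are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.spatial import Delaunay

def interp_zero(p0, p1, d0, d1):
    # Linear interpolation to the zero crossing of the SDF along an edge.
    t = d0 / (d0 - d1)
    return p0 + t * (p1 - p0)

def marching_tetrahedra(points, tets, sdf):
    # Simplified marching tetrahedra: classify each tetrahedron's vertices
    # by the sign of the SDF and emit triangles of the zero isosurface.
    tris = []
    for tet in tets:
        d = sdf[tet]
        inside = [i for i in range(4) if d[i] < 0.0]
        if len(inside) in (0, 4):         # tetrahedron entirely on one side
            continue
        if len(inside) == 3:              # symmetric to the one-inside case
            inside = [i for i in range(4) if i not in inside]
        if len(inside) == 1:              # one vertex separated: one triangle
            a = inside[0]
            tris.append([interp_zero(points[tet[a]], points[tet[b]], d[a], d[b])
                         for b in range(4) if b != a])
        else:                             # 2-2 split: a quad, cut into two triangles
            a, b = inside
            c, e = [i for i in range(4) if i not in inside]
            q = [interp_zero(points[tet[a]], points[tet[c]], d[a], d[c]),
                 interp_zero(points[tet[a]], points[tet[e]], d[a], d[e]),
                 interp_zero(points[tet[b]], points[tet[e]], d[b], d[e]),
                 interp_zero(points[tet[b]], points[tet[c]], d[b], d[c])]
            tris.append([q[0], q[1], q[2]])
            tris.append([q[0], q[2], q[3]])
    return tris

# Demo: a unit-sphere SDF on a coarse flat grid stands in for the
# hierarchical SDF fused from the Kinect depth maps.
n = 12
axis = np.linspace(-1.5, 1.5, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), -1).reshape(-1, 3)
sdf = np.linalg.norm(grid, axis=1) - 1.0   # negative inside the sphere
tets = Delaunay(grid).simplices            # global Delaunay tetrahedralization
print(f"extracted {len(marching_tetrahedra(grid, tets, sdf))} triangles")

Because the Delaunay step tetrahedralizes all voxel positions, the resulting mesh fills their convex hull, so the isosurface can be extracted wherever the fused SDF changes sign, mirroring the convex-hull property noted in the abstract.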
Key words
depth image fusion /
computer vision /
signed distance field /
3D reconstruction /
Delaunay tetrahedralization /
marching tetrahedra