**ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2015)**

Shihao Wu^{1} Hui Huang^{2*} Minglun Gong^{3} Matthias Zwicker^{1} Daniel Cohen-Or^{4}

^{1} University of Bern ^{2}Shenzhen VisuCA Key Lab / SIAT ^{3}Memorial University ^{4}Tel Aviv University

**Figure 1:** The deep points representation (left) is a set of line sections, each with one end (red) on the surface (middle) and the other (blue) on the meso-skeleton (right).

**Abstract **

**API and Data:** available for download below.

**[To reference our software or data in a publication, please include the BibTeX entry below and a link to this website.]**

**Video @ YouTube**

**Video @ Youku**

**Overview**

**Figure 2**: Deep points consolidation. Given the input point cloud (a) and its initial consolidation result (b), our approach creates deep points by sinking the inner points to form a meso-skeleton (c) and moving the outer points along the surface to complete missing areas (d). The final representation consists of a set of coherent vectors that connect the surface with the meso-skeleton.
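The representation described above pairs each surface sample with a point on the meso-skeleton. The following is a minimal sketch of such a structure; the class and method names (`DeepPoint`, `length`, `orientation`) are illustrative assumptions, not the authors' actual API.

```python
import numpy as np

class DeepPoint:
    """A line section with one end on the surface and the other on the meso-skeleton."""

    def __init__(self, surface_pt, skeleton_pt):
        self.surface_pt = np.asarray(surface_pt, dtype=float)    # outer end, on the surface
        self.skeleton_pt = np.asarray(skeleton_pt, dtype=float)  # inner end, on the meso-skeleton

    def length(self):
        # Local "depth" of the shape at this sample.
        return float(np.linalg.norm(self.surface_pt - self.skeleton_pt))

    def orientation(self):
        # Unit vector from the skeleton end toward the surface end; such a
        # vector gives a consistently oriented direction for each surface sample.
        v = self.surface_pt - self.skeleton_pt
        return v / np.linalg.norm(v)

dp = DeepPoint([1.0, 0.0, 0.0], [0.0, 0.0, 0.0])
print(dp.length())       # 1.0
print(dp.orientation())  # [1. 0. 0.]
```

Storing both endpoints keeps the surface point, its orientation, and its distance to the interior structure in one coherent record, which is the property the figure illustrates.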

**Results**

**Figure 3**: Autoscanning overview: given an incomplete point cloud (b) obtained by blind scanning of an unknown object (a), we first reconstruct a Poisson iso-surface and estimate its confidence map (c), where high-confidence areas are shown in red and low-confidence areas in blue. A 3D viewing vector field (VVF) is then generated to determine a set of next-best-views (NBVs). A slice of the VVF is visualized in (d), where black arrows show the NBVs. Scanning from these NBVs captures more points (red in (e)) in low-confidence areas. The scanning process is iterated until it converges to a high-quality reconstruction (f).

**Figure 4**: The input point cloud (a) contains noise and large missing regions. Applying Poisson surface reconstruction [Kazhdan and Hoppe 2013] on either the input (a) or the WLOP consolidation [Huang et al. 2009] result (c) does not yield satisfactory models; see (b) and (d), respectively. The surface points shown in (e) are consolidated and completed by our dpoints technique. This leads to a much better Poisson surface reconstruction (f). In (c) and (e), the errors of the surface point normals estimated by local PCA are evaluated based on the ground truth and color coded (blue means higher error).

**Figure 5**: A comparison among the Poisson surface reconstructions [Kazhdan and Hoppe 2013] obtained using input points directly (a), ROSA skeleton [Tagliasacchi et al. 2009] (b), L1-medial skeleton [Huang et al. 2013b] (c), and our dpoints consolidation (d).

**Figure 6**: Results on standard benchmark 3D scans (a), downloaded from the SHREC 2015 dataset [NIST 2015]. The direct Poisson reconstruction results (b) incorrectly fuse multiple parts together. Using the consolidated dpoints (c & d), the thin and adjacent structures are better preserved.

**Comparison **

**Figure 7**: Handling objects (a) with complicated thin and non-tubular structures. Directly applying Poisson reconstruction to the WLOP result (b) fails to produce satisfactory models (d). Our reconstruction results (e), based on dpoints consolidation (c), better preserve the thin and non-tubular structures while maintaining the correct connectivity between parts.

**Figure 8**: Comparison with the visibility-based algorithm [Khalfaoui et al. 2013] (a) and the PVS approach [Kriegel et al. 2013] (b) on the virtual model shown in Figure 5(a).

**Figure 9**: Reconstruction results under different confidence measures.

**Figure 10**: Post-processing for reconstructing fine geometric details and sharp features. Due to downsampling, the Poisson reconstruction results (d) on dpoints (c) do not preserve fine details and sharp features as well as reconstructions of the original shapes (a, b); a post-processing EAR [Huang et al. 2013a] step (e) effectively recovers them (f) by inserting and projecting additional dpoints.

**Figure 11**: Quantitative evaluation on reconstruction accuracy using virtual scans of a ground truth synthetic model. When a single scan (a) is used, the direct Poisson reconstruction result (inset in (b)) does not resemble the model (shown in (b)). In comparison, the Poisson reconstructed model (inset in (d)) based on dpoints (c) is visually much more accurate. The reconstruction errors, measured using the distances between vertices on the ground truth model and their closest points on the reconstructed surface, are visualized in (b) and (d). The error distributions under clean and noise-corrupted scans are plotted in (e) and (f), respectively.
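The caption above measures reconstruction error as the distance from each vertex of the ground-truth model to its closest point on the reconstructed surface. A simple sketch of that metric, approximating the reconstructed surface by a dense point sampling and using brute-force nearest neighbors (the function name `reconstruction_errors` is an assumption for illustration; a KD-tree would scale better):

```python
import numpy as np

def reconstruction_errors(gt_vertices, recon_points):
    """For each ground-truth vertex, return the distance to its closest
    reconstructed sample point (a point-sampled stand-in for the surface)."""
    gt = np.asarray(gt_vertices, dtype=float)    # (N, 3) ground-truth vertices
    rp = np.asarray(recon_points, dtype=float)   # (M, 3) reconstructed samples
    # Pairwise squared distances, then take the minimum over reconstructed points.
    d2 = ((gt[:, None, :] - rp[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1))

gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
recon = np.array([[0.0, 0.0, 0.1], [1.2, 0.0, 0.0]])
print(reconstruction_errors(gt, recon))  # [0.1 0.2]
```

Visualizing these per-vertex distances as colors on the ground-truth model, and plotting their histogram, yields displays analogous to panels (b)/(d) and (e)/(f).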

**Acknowledgments**

**BibTeX**