Quality-driven Poisson-guided Autoscanning


ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia). 2014.

Shihao Wu¹        Wei Sun¹        Pinxin Long¹        Hui Huang¹*        Daniel Cohen-Or²        Minglun Gong³        Oliver Deussen⁴        Baoquan Chen⁵

¹Shenzhen VisuCA Key Lab / SIAT        ²Tel Aviv University        ³Memorial University        ⁴University of Konstanz        ⁵Shandong University



Figure 1: Our robot-based, Poisson-guided autoscanner can progressively, adaptively, and fully automatically generate complete, high-quality, and high-fidelity scan models.

Abstract

We present a quality-driven, Poisson-guided autonomous scanning method. Unlike previous scan-planning techniques, we do not aim to minimize the number of scans needed to cover the object's surface, but rather to ensure high-quality scanning of the model. This goal is achieved by placing the scanner at strategically selected Next-Best-Views (NBVs) to progressively capture the geometric details of the object, until both completeness and high fidelity are reached. The technique is based on the analysis of a Poisson field and its geometric relation with an input scan. We generate a confidence map that reflects the quality/fidelity of the estimated Poisson iso-surface. The confidence map guides the generation of a viewing vector field, which is then used for computing a set of NBVs. We applied the algorithm to two different robotic platforms: a PR2 mobile robot and a one-arm industrial robot. We demonstrate the advantages of our method through a number of autonomous high-quality scans of complex physical objects, as well as through performance comparisons against state-of-the-art methods.
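To make the scanning loop concrete, here is a minimal Python sketch of the quality-driven iteration described above. It is an illustrative outline, not the authors' implementation: the scanner interface and the helpers for reconstruction, confidence estimation, VVF construction, and NBV extraction are hypothetical stand-ins passed in as arguments.

import numpy as np

# Hypothetical sketch of the quality-driven autoscanning loop.
# `scanner` must provide blind_scan() and scan_from(view); the four
# helper callables stand in for the paper's Poisson reconstruction,
# confidence estimation, viewing vector field, and NBV extraction.
def autoscan(scanner, reconstruct, estimate_confidence,
             build_vvf, extract_nbvs,
             conf_threshold=0.9, max_iters=20):
    points = scanner.blind_scan()                    # incomplete initial cloud
    for _ in range(max_iters):
        surface = reconstruct(points)                # Poisson iso-surface
        conf = estimate_confidence(surface, points)  # per-vertex values in [0, 1]
        if conf.min() >= conf_threshold:             # complete and high fidelity
            break
        vvf = build_vvf(surface, conf)               # 3D viewing vector field
        for view in extract_nbvs(vvf):               # next-best-views
            points = np.vstack([points, scanner.scan_from(view)])
    return reconstruct(points)

The termination test mirrors the stated goal: scanning stops only when even the least-confident region of the iso-surface passes the quality threshold, rather than when surface coverage is merely complete.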

Overview


Figure 2: Autoscanning overview: given an incomplete point cloud (b) obtained by a blind scan of an unknown object (a), we first reconstruct a Poisson iso-surface and estimate its confidence map (c), where high-confidence areas are shown in red and low-confidence areas in blue. A 3D viewing vector field (VVF) is then generated to determine a set of next-best-views (NBVs). A slice of the VVF is visualized in (d), where black arrows show the NBVs. Scanning from these NBVs captures more points (red in (e)) in low-confidence areas. The scanning process is iterated until it converges to a high-quality reconstruction (f).
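As a rough illustration of the confidence map in step (c): the paper derives confidence from the geometric relation between the Poisson field and the input scan; a much simpler stand-in, sketched below, scores each iso-surface vertex by how densely the scan samples support it. The function name, the search radius, and the normalization are illustrative assumptions, not values from the paper.

import numpy as np
from scipy.spatial import cKDTree

# Toy density-based confidence proxy (not the paper's measure): a vertex
# supported by many nearby scan points gets confidence near 1 (red in
# Figure 2(c)); an unsupported vertex gets confidence near 0 (blue).
def density_confidence(surface_vertices, scan_points, radius=0.05):
    tree = cKDTree(scan_points)
    neighborhoods = tree.query_ball_point(surface_vertices, radius)
    counts = np.array([len(n) for n in neighborhoods], dtype=float)
    return counts / max(counts.max(), 1.0)  # normalize to [0, 1]

Low-confidence vertices are exactly the regions toward which the viewing vector field steers subsequent NBVs.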

Video

[Embedded YouTube video]

Results


Figure 4: For an unknown digital model, the reconstruction is progressively and rapidly enriched as more virtual scans are performed. At each scan iteration, multiple NBVs (indicated by green arrows) are computed and used for positioning the virtual scanner. The top row shows the obtained point clouds, where red points were acquired during the current scan iteration. The middle row presents the Poisson-reconstructed models, and the bottom row shows the magnitude of the VVF along a given slicing plane.

Figure 5: An elephant model being scanned by a one-arm industrial robot. The trunk is completely missing in the initial blind scan. By iteratively scanning from the automatically selected NBVs, the final model captures all geometric details.

Figure 6: Simulating an outdoor scene scanning scenario using a scaled church model (a). The areas with missing data in the initial blind scans (b) are gradually covered in subsequent scan iterations (e.g., c-d). The final reconstruction (e) is both complete and detail-preserving.

Comparison

Figure 7: Comparison with the visibility-based algorithm [Khalfaoui et al. 2013] (a) and the PVS approach [Kriegel et al. 2013] (b) on the virtual model shown in Figure 5(a).


Figure 9: Reconstruction results under different confidence measures.

Figure 10: Comparison between two models obtained using the same number of scans. Scanning from the first 36 NBVs adaptively selected for the input object (a) results in a complete model (b). The raised weapon and the feet are missing from the model (c) obtained by scanning from 36 directions that are selected by a sphere-based NBV approach [Vásquez-Gómez et al. 2009].


Figure 11: Comparison between models generated by manual scans and our autonomous scanner.

Acknowledgments
The authors would like to thank all the reviewers for their valuable comments and feedback. This work was supported in part by grants from NSFC (61232011, 61103166, 61379091), the 973 Program (2014CB360503), the 863 Program (2012AA011801), the Shenzhen Technology Innovation Program (CXB201104220029A, KQCX20120807104901791, ZD201111080115A, JCYJ20130401170306810, JSGG20130624154940238), NSERC, and the Israel Science Foundation.

BibTeX
@article{Wu2014autoscanning,
  title   = {Quality-driven Poisson-guided Autoscanning},
  author  = {Wu, Shihao and Sun, Wei and Long, Pinxin and Huang, Hui and Cohen-Or, Daniel and Gong, Minglun and Deussen, Oliver and Chen, Baoquan},
  journal = {ACM Transactions on Graphics (Proc. SIGGRAPH Asia)},
  volume  = {33},
  number  = {6},
  pages   = {203:1--203:12},
  year    = {2014}
}
Copyright © 2016-2018 Visual Computing Research Center