Space-Time Co-Segmentation of Articulated Point Cloud Sequences

Computer Graphics Forum (Proceedings of EUROGRAPHICS 2016)

Qing Yuan1        Guiqing Li1        Kai Xu3,4        Xudong Chen1        Hui Huang2,3*

1South China University of Technology        2Shenzhen University        3Shenzhen VisuCA Key Lab / SIAT        4National University of Defense Technology


Figure 1: Space-time co-segmentation for the Pink Panther dataset: motion-based segmentation of individual frames is shown in the top row, and the co-segmentation result in the bottom row.

Abstract

Consistent segmentation is central to many applications based on dynamic geometric data. Directly segmenting a raw 3D point cloud sequence is a challenging task due to the low data quality and large inter-frame variation across the whole sequence. We propose a local-to-global approach to co-segment point cloud sequences of articulated objects into near-rigid moving parts. Our method starts from a per-frame point clustering, derived from a robust voting-based trajectory analysis. The local segments are then progressively propagated to neighboring frames with a cut propagation operation, and further merged across all frames using a novel space-time segment grouping technique, leading to a globally consistent and compact segmentation of the entire articulated point cloud sequence. Such progressive propagation and merging, in both the space and time dimensions, makes our co-segmentation algorithm especially robust in handling the noise, occlusions, and pose/view variations that are usually associated with raw scan data.
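To make the per-frame stage concrete, below is a minimal Python sketch of motion-based point clustering from trajectories. It replaces the paper's robust voting-based trajectory analysis with a much simpler pairwise rigidity test: two trajectories are grouped if their mutual distance stays nearly constant over the frames. The function names and the tolerance parameter are illustrative assumptions, not the authors' implementation.

# A minimal sketch of motion-based point clustering from trajectories.
# The paper's voting scheme is replaced by a simple rigidity test;
# `rigidity_affinity` and `tol` are illustrative names only.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def rigidity_affinity(traj, tol=0.02):
    """traj: (F, N, 3) array of N point trajectories over F frames.
    Returns a binary N x N affinity: 1 if the pairwise distance
    deviates by less than `tol` across all frames."""
    # Pairwise distances per frame: (F, N, N)
    d = np.linalg.norm(traj[:, :, None, :] - traj[:, None, :, :], axis=-1)
    dev = d.max(axis=0) - d.min(axis=0)   # distance variation over time
    return (dev < tol).astype(np.int8)

def cluster_trajectories(traj, tol=0.02):
    A = rigidity_affinity(traj, tol)
    n_parts, labels = connected_components(csr_matrix(A), directed=False)
    return n_parts, labels                # one label per trajectory

# Example: two points moving rigidly together, one moving independently.
F = 5
t = np.linspace(0.0, 1.0, F)
traj = np.zeros((F, 3, 3))
traj[:, 0, 0] = t          # point 0 translates along x
traj[:, 1, 0] = t + 1.0    # point 1 translates with point 0 (rigid pair)
traj[:, 2, 1] = t ** 2     # point 2 follows a different motion
print(cluster_trajectories(traj))  # -> (2, array([0, 0, 1]))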

Overview


Figure 2: Pipeline of our space-time co-segmentation: given an input sequence of point clouds, our method starts by performing a local per-frame segmentation through motion-based point clustering (left). The local segmentations are then mutually propagated between every two adjacent frames, yielding a number of sub-sequences formed by segments undergoing near-rigid motion (middle). Finally, we conduct a space-time grouping over the sub-sequences to form a globally consistent and compact segmentation over the whole 4D sequence (right).
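The propagation step in the middle of the pipeline can be illustrated with a small sketch: segment labels of frame t are carried to frame t+1 through nearest-neighbor correspondences. This is a strong simplification of the paper's cut propagation operation; `propagate_labels` and the k-d tree lookup are illustrative choices rather than the authors' code.

# A minimal sketch of label propagation between adjacent frames via
# nearest-neighbor matching; a stand-in for the cut propagation step.

import numpy as np
from scipy.spatial import cKDTree

def propagate_labels(points_t, labels_t, points_t1):
    """points_t:  (N, 3) points of frame t with per-point labels_t.
    points_t1: (M, 3) points of frame t+1.
    Returns a label per point of frame t+1, copied from its
    nearest neighbor in frame t."""
    tree = cKDTree(points_t)
    _, idx = tree.query(points_t1, k=1)
    return labels_t[idx]

# Toy example: two clusters drift slightly between frames.
pts_t  = np.array([[0, 0, 0], [0.1, 0, 0], [5, 0, 0], [5.1, 0, 0]], float)
lab_t  = np.array([0, 0, 1, 1])
pts_t1 = pts_t + np.array([0.05, 0.0, 0.0])    # small rigid drift
print(propagate_labels(pts_t, lab_t, pts_t1))  # -> [0 0 1 1]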

Results


Figure 3: Space-time co-segmentation of a clenching hand; the input motion data is shown as the overlaid scan sequence on the left. Top row: per-frame motion-based segmentation. Bottom row: our co-segmentation result.

Acknowledgments

We would like to thank all the reviewers for their valuable comments and constructive suggestions. This work was supported in part by NSFC (61572202, 61522213, 61572507), 973 Program (2015CB352501, 2014CB360503), Guangdong Science and Technology Program (2015A030312015, 2014B050502009, 2014TX01X033, S2013020012795), Shenzhen Innovation Program (JCYJ20151015151249564, CXB201104220029A) and Research Funds for the Central Universities (F020501).


Bibtex

@article{Coseg16,
     title = {Space-Time Co-Segmentation of Articulated Point Cloud Sequences},
     author = {Yuan, Qing and Li, Guiqing and Xu, Kai and Chen, Xudong and Huang, Hui},
     journal = {Computer Graphics Forum},
     volume = {35},
     number = {2},
     year = {2016},
     pages = {419--429},
}


Copyright © 2016-2018 Visual Computing Research Center