Consistent Two-Flow Network for Tele-Registration of Point Clouds

IEEE Transactions on Visualization and Computer Graphics 2021


Zihao Yan1    Zimu Yi1    Ruizhen Hu1*    Niloy J. Mitra2    Daniel Cohen-Or1,3    Hui Huang1

1Shenzhen University    2University College London    3Tel Aviv University





Fig. 1: CTF-Net registers pairs of partial scans with little or no overlap. The network is designed so that the registration and completion branches mutually cooperate toward a consistent result, which regularizes both the (global) registration and the completion problems. Here, we show the combined registration and completion of two real partial scans of a toy airplane that share little overlap.


Abstract

Rigid registration of partial observations is a fundamental problem in various applied fields. In computer graphics, special attention has been given to the registration between two partial point clouds generated by scanning devices. State-of-the-art registration techniques still struggle when the overlap region between the two point clouds is small, and completely fail if there is no overlap between the scan pairs. In this paper, we present a learning-based technique that alleviates this problem and allows registration between point clouds, presented in arbitrary poses, that have little or even no overlap, a setting that has been referred to as tele-registration. Our technique is based on a novel neural network design that learns a prior of a class of shapes and can complete a partial shape. The key idea is to combine the registration and completion tasks so that they reinforce each other. In particular, we simultaneously train the registration network and the completion network using two coupled flows, one that registers-and-completes and one that completes-and-registers, and encourage the two flows to produce consistent results. We show that, compared with each separate flow, this two-flow training leads to robust and reliable tele-registration, and hence to a better point cloud prediction that completes the registered scans. It is also worth noting that each of the components of our neural network, on its own, outperforms state-of-the-art methods in completion and registration, respectively. We further analyze our network with several ablation studies and demonstrate its performance on a large number of partial point clouds, both synthetic and real-world, that have only small or no overlap.
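To make the two-flow idea concrete, below is a minimal PyTorch-style sketch of how a register-and-complete flow and a complete-and-register flow could share one registration network and one completion network, with a consistency term comparing their outputs. The interfaces (registration_net / completion_net), the [R | t] transform parameterization, and the helpers apply_rigid, chamfer, and consistency_loss are illustrative assumptions, not the released implementation.

# Conceptual sketch only (not the released CTF-Net code): the two coupled flows,
# register-and-complete (R-C) and complete-and-register (C-R), plus a consistency
# term that ties their outputs together.
import torch
import torch.nn as nn

def apply_rigid(points, T):
    # points: (B, N, 3); T: (B, 3, 4) = [R | t], one rigid transform per batch item.
    R, t = T[:, :, :3], T[:, :, 3]
    return points @ R.transpose(1, 2) + t.unsqueeze(1)

def chamfer(x, y):
    # Symmetric Chamfer distance between point sets x: (B, N, 3) and y: (B, M, 3).
    d = torch.cdist(x, y)
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

class TwoFlowSketch(nn.Module):
    def __init__(self, registration_net: nn.Module, completion_net: nn.Module):
        super().__init__()
        self.reg = registration_net   # predicts a (B, 3, 4) rigid transform aligning scan B to scan A
        self.comp = completion_net    # completes a partial (or merged) point cloud

    def forward(self, scan_a, scan_b):
        # R-C flow: register the raw partial scans, then complete the merged cloud.
        T_rc = self.reg(scan_a, scan_b)
        full_rc = self.comp(torch.cat([scan_a, apply_rigid(scan_b, T_rc)], dim=1))

        # C-R flow: complete each partial scan, then register the completions.
        full_a, full_b = self.comp(scan_a), self.comp(scan_b)
        T_cr = self.reg(full_a, full_b)
        full_cr = torch.cat([full_a, apply_rigid(full_b, T_cr)], dim=1)

        return T_rc, full_rc, T_cr, full_cr

def consistency_loss(T_rc, full_rc, T_cr, full_cr):
    # The two flows should agree on both the rigid transform and the completed shape.
    return (T_rc - T_cr).abs().mean() + chamfer(full_rc, full_cr)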


Fig. 2: The architecture of CTF-Net. Given a pair of partial scans, CTF-Net simultaneously predicts the transformation parameters for registration and the point coordinates for completion. The prediction is performed in a mirrored manner: one flow performs register-and-complete (R-C) and the other performs complete-and-register (C-R). The R-C flow and the C-R flow are denoted by green and purple lines, respectively. The two flows mutually reinforce each other by enforcing consistency between their outputs, represented by the gray lines.
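Continuing the sketch above, a hypothetical training step could combine supervised registration and completion losses with the consistency term, so that each branch regularizes the other. The data loader, ground-truth tensors, sub-network constructors, and loss weights below are placeholders rather than the paper's actual settings.

# Illustrative training step, reusing the sketch above (assumed names and weights,
# not the official code).
model = TwoFlowSketch(RegistrationNet(), CompletionNet())   # hypothetical sub-networks
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for scan_a, scan_b, T_gt, full_gt in loader:                # assumed data loader
    T_rc, full_rc, T_cr, full_cr = model(scan_a, scan_b)

    # Supervise each flow against ground truth, and tie the two flows together.
    loss_reg = (T_rc - T_gt).abs().mean() + (T_cr - T_gt).abs().mean()
    loss_comp = chamfer(full_rc, full_gt) + chamfer(full_cr, full_gt)
    loss = loss_reg + loss_comp + 0.1 * consistency_loss(T_rc, full_rc, T_cr, full_cr)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()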



Fig. 5: The prediction results of CTF-Net. The first two columns show the input pairs (colored in blue and orange). The third and fourth columns show the registration results from C-R flow, where the corresponding original mesh is shown in transparent gray to better display the relative position of the registered parts. The fifth and sixth columns show the completion results from R-C flow, and the last two columns are the ground truth point clouds.



Fig. 6: The prediction results on real scans. The first column shows a photo of each real object. The next two columns show the paired partial inputs. The fourth and fifth columns show the registration results, taking each part in turn as the anchor, and the following two columns show the completion results. The complete shape fused from a much denser RGB-D sequence is shown in the last column for comparison.



Data&Code

Note that the DATA and CODE are free for Research and Education Use ONLY. 

Please cite our paper (BibTeX below) if you use any part of our ALGORITHM, CODE, DATA, or RESULTS in any publication.

Link: https://github.com/Salingo/CTF-Net



Acknowledgements

This work was supported in part by NSFC (U2001206, 61872250), GD Talent Program (2019JC05X328), GD Natural Science Foundation (2021B1515020085, 2020A0505100064), DEGP Key Project (2018KZDXM058), Shenzhen Science and Technology Program (RCJC20200714114435012), Royal Society (NAF-R1-180099), and Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ).



Bibtex

@article{CTFNet21,
  title = {Consistent Two-Flow Network for Tele-Registration of Point Clouds},
  author = {Zihao Yan and Zimu Yi and Ruizhen Hu and Niloy J. Mitra and Daniel Cohen-Or and Hui Huang},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  volume = {},
  pages = {},
  year = {2021},
}



Downloads (faster for people in China)

Downloads (faster for people in other places)