P2P-NET: Bidirectional Point Displacement Net for Shape Transform

ACM Transactions on Graphics (Proceedings of SIGGRAPH 2018)


Kangxue Yin1           Hui Huang2,*           Daniel Cohen-Or2,3           Hao Zhang1

1Simon Fraser University            2Shenzhen University           3Tel Aviv University


Fig. 1. We develop a general-purpose deep neural network which learns geometric transformations between point sets, e.g., from cross-sectional profiles to 3D shapes, as shown. The user can edit the profiles to create an interpolating sequence (top), and our network transforms all of them into point-based 3D shapes.


Abstract 

We introduce P2P-NET, a general-purpose deep neural network which learns geometric transformations between point-based shape representations from two domains, e.g., meso-skeletons and surfaces, partial and complete scans, etc. The architecture of P2P-NET is that of a bidirectional point displacement network, which transforms a source point set to a target point set with the same cardinality, and vice versa, by applying point-wise displacement vectors learned from data. P2P-NET is trained on paired shapes from the source and target domains, but without relying on point-to-point correspondences between the source and target point sets. The training loss combines two uni-directional geometric losses, each enforcing a shape-wise similarity between the predicted and the target point sets, and a cross-regularization term to encourage consistency between displacement vectors going in opposite directions. We develop and present several different applications enabled by our general-purpose bidirectional P2P-NET to highlight the effectiveness, versatility, and potential of our network in solving a variety of point-based shape transformation problems.
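As a concrete illustration of the training loss described above, the following NumPy sketch assembles two uni-directional geometric losses and a cross-regularization term. The choice of the Chamfer distance as the geometric loss, the nearest-neighbor form of the regularizer, and the names chamfer_distance, p2p_loss, and alpha are illustrative assumptions, not the paper's exact formulation or the released code's API.

import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point sets P (N, 3) and Q (M, 3)."""
    d2 = np.sum((P[:, None, :] - Q[None, :, :]) ** 2, axis=-1)  # (N, M) squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def p2p_loss(X, Y, dXY, dYX, alpha=0.1):
    """X, Y: paired source/target point sets of equal cardinality (N, 3).
    dXY: predicted per-point displacements moving X toward the domain of Y.
    dYX: predicted per-point displacements moving Y toward the domain of X."""
    X_pred = X + dXY  # X transformed toward Y's domain
    Y_pred = Y + dYX  # Y transformed toward X's domain

    # Two uni-directional geometric losses: shape-wise similarity between the
    # predicted and target point sets, with no point-to-point correspondence.
    geo = chamfer_distance(X_pred, Y) + chamfer_distance(Y_pred, X)

    # Illustrative cross-regularization: the forward displacement at each source
    # point should roughly cancel the backward displacement of its nearest target point.
    nn = np.argmin(np.sum((X_pred[:, None, :] - Y[None, :, :]) ** 2, axis=-1), axis=1)
    cross = np.mean(np.sum((dXY + dYX[nn]) ** 2, axis=-1))

    return geo + alpha * cross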


Fig. 2. Network architecture of our bidirectional P2P-NET.


Fig. 3. Ablation study with a toy example (cat & dog). The transformations were learned from a dataset synthesized by randomly rotating and scaling a pair of 2D point sets in the shapes of a dog and a cat, respectively. The top row shows transformations from dog (blue) to cat (red), and the bottom row shows transformations from cat (blue) to dog (red). For clearer visualization, we randomly plot only 20% of the displacement vectors using black lines.
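For reference, a minimal NumPy sketch of how such a paired toy dataset could be synthesized is given below; the function name random_pair and the sampling ranges for rotation and scale are assumptions for illustration, not the exact protocol used in the paper.

import numpy as np

def random_pair(cat_pts, dog_pts, rng):
    """cat_pts, dog_pts: (N, 2) point sets; returns one paired training sample
    obtained by applying the same random rotation and scale to both shapes."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    scale = rng.uniform(0.5, 1.5)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    T = scale * R
    return cat_pts @ T.T, dog_pts @ T.T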


(a) Transformations between meso-skeleton and surface samples. The 2nd and 4th rows show transformations from meso-skeleton (left) to surface samples (right). The 1st and 3rd rows show transformations from surface samples (right) to meso-skeleton (left).


(b) Transformations between meso-skeleton and point scan. The 2nd and 4th rows show transformations from meso-skeleton (left) to point scan (right). The 1st and 3rd rows show transformations from point scan (right) to meso-skeleton (left).


(c) Transformations between point scan and surface samples. The 2nd and 4th rows show transformations from point scan (left) to surface samples (right). The 1st and 3rd rows show transformations from surface samples (right) to point scan (left).

Fig. 4. A gallery of point set transformations among meso-skeletons, shape surfaces, and single-view point scans via our network P2P-NET. Note that, to obtain the transformed surface point samples, we feed the same input eight times to the network and integrate the network outputs to produce a dense point set.
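The densification step mentioned in the caption can be sketched as follows. Here net stands for a trained P2P-NET forward pass whose output varies across runs (e.g., via per-pass noise); the function densify and its signature are hypothetical, not part of the released code.

import numpy as np

def densify(net, source_pts, passes=8):
    """Feed the same (N, 3) source point set to `net` `passes` times and merge
    the outputs into a single dense point set of shape (passes * N, 3)."""
    outputs = [net(source_pts) for _ in range(passes)]
    return np.concatenate(outputs, axis=0)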


Fig. 5. Visualization of vectors (grey lines) depicting point-wise displacements learned by P2P-NET for various domain mappings, where the source point sets are rendered in orange. Note that for ease of visualization, only 30% of the vectors are displayed and we do not show the target point sets.


Fig. 8. Transformations from 2D cross-sectional profiles (a) to 3D object surfaces (b) and (c). In addition to the ground truths (d), we also provide the closest 2D cross-sectional profiles (e) retrieved from the training set and their corresponding surface point sets (f).



Data & Code

To reference our ALGORITHM, CODE, DATA or RESULTS in any publication, please include the BibTeX below.

Link: https://github.com/kangxue/P2P-NET 



ACKNOWLEDGMENTS

The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported in part by NSFC (61522213, 61761146002, 6171101466), 973 Program (2015CB352501), Guangdong Science Program (2015A030312015), Shenzhen Innovation Program (KQJSCX20170727101233642, JCYJ20151015151249564), ISF-NSFC Joint Research Program (2217/15, 2472/17), Israel Science Foundation (2366/16) and NSERC (611370).


Bibtex

@article{P2P18,
title = {P2P-NET: Bidirectional Point Displacement Net for Shape Transform},
author = {Kangxue Yin and Hui Huang and Daniel Cohen-Or and Hao Zhang},
journal = {ACM Transactions on Graphics (Proc. SIGGRAPH)},
volume = {37},
number = {4},
pages = {152:1--152:13},  
year = {2018},
}
