Global-to-Local Generative Model for 3D Shapes

ACM Transactions on Graphics (Proceedings of SIGGRAPH ASIA)

Hao Wang1*          Nadav Schor2*          Ruizhen Hu1          Haibin Huang3          Daniel Cohen-Or1,2        Hui Huang1†

1Shenzhen University           2Tel Aviv University         3Megvii / Face++ Research

 *Joint first authors        †Corresponding author



Fig. 1. Given a collection of 3D semantically segmented chairs, we train a network to generate new chairs from the same distribution. The 1024 generated chairs are encoded with an auto-encoder and embedded into 2D using MDS on the Euclidean distances in the latent space. The five colors of the embedded points correspond to clusters of the training data. For each cluster, representative chairs are shown in groups against a background of the matching color. Note the rich variations in shape geometry.
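The embedding step in Fig. 1 is straightforward to reproduce. Below is a minimal sketch, assuming the auto-encoder latent codes are available as an (N, d) NumPy array; the random codes here are stand-ins for the real ones, and scikit-learn's metric MDS computes the Euclidean pairwise distances internally.

```python
# Minimal sketch: embed auto-encoder latent codes into 2D with MDS,
# using Euclidean distances in the latent space (as in Fig. 1).
# `latent_codes` is a hypothetical (N, d) array standing in for real AE codes.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
latent_codes = rng.normal(size=(1024, 128))  # stand-in for real AE codes

# Metric MDS; dissimilarity='euclidean' derives pairwise distances
# directly from the input feature vectors.
mds = MDS(n_components=2, dissimilarity='euclidean', random_state=0)
xy = mds.fit_transform(latent_codes)  # (1024, 2) points, ready for plotting
```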


Abstract

We introduce a generative model for 3D man-made shapes. The presented method takes a global-to-local (G2L) approach: a generative adversarial network (GAN) is first built to construct the overall structure of the shape, segmented and labeled into parts, and a novel conditional auto-encoder (AE) is then added to act as a part-level refiner. The GAN, equipped with additional local discriminators and quality losses, synthesizes a voxel-based model and assigns part labels to the voxels, represented in separate channels. The AE is trained to amend the initial synthesis of the parts, yielding more plausible part geometries. We also introduce new means to measure and evaluate the performance of an adversarial generative model. We demonstrate that our global-to-local generative model produces significantly better results than a plain three-dimensional GAN, in terms of both shape variety and distribution with respect to the training data.
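To make the global-to-local split concrete, here is an illustrative PyTorch sketch, not the authors' released code: a 3D generator emits a 32³ grid with one channel per part label (plus an assumed background channel), a global discriminator scores the full labeled volume, and one local discriminator per part scores that part's channel alone. All layer sizes and the number of parts are our own assumptions.

```python
# Illustrative PyTorch sketch (not the authors' code) of the global-to-local
# idea: a 3D generator outputs a 32^3 grid with one channel per part label,
# a global discriminator sees the whole labeled volume, and one local
# discriminator per part sees only that part's channel.
import torch
import torch.nn as nn

N_PARTS = 4  # e.g. chair back / seat / legs / arm-rests (assumed)

class Generator(nn.Module):
    def __init__(self, z_dim=128, n_parts=N_PARTS):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 256, 4, 1, 0), nn.BatchNorm3d(256), nn.ReLU(True),  # 4^3
            nn.ConvTranspose3d(256, 128, 4, 2, 1), nn.BatchNorm3d(128), nn.ReLU(True),    # 8^3
            nn.ConvTranspose3d(128, 64, 4, 2, 1), nn.BatchNorm3d(64), nn.ReLU(True),      # 16^3
            nn.ConvTranspose3d(64, n_parts + 1, 4, 2, 1),                                 # 32^3
        )

    def forward(self, z):
        logits = self.net(z.view(z.size(0), -1, 1, 1, 1))
        return torch.softmax(logits, dim=1)  # per-voxel part-label distribution

class Discriminator(nn.Module):
    """Shared shape for both the global critic (all channels) and the
    per-part local critics (a single part channel)."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),   # 16^3
            nn.Conv3d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),     # 8^3
            nn.Conv3d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, True),    # 4^3
            nn.Conv3d(256, 1, 4, 1, 0),                               # scalar score
        )

    def forward(self, v):
        return self.net(v).view(v.size(0))

G = Generator()
D_global = Discriminator(N_PARTS + 1)
D_local = nn.ModuleList([Discriminator(1) for _ in range(N_PARTS)])

z = torch.randn(2, 128)
vox = G(z)                                   # (2, N_PARTS+1, 32, 32, 32)
score_g = D_global(vox)                      # global realism score
score_l = [D_local[i](vox[:, i + 1:i + 2]) for i in range(N_PARTS)]  # per-part scores
```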




Fig. 2. An overview of our generative model. It consists of two parts: a global-to-local GAN (left) that synthesizes a 32³ model, and a part refiner (PR, right) that enhances the synthesized parts of the model by refining them, completing missing regions, and increasing the resolution to 64³.
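The refiner stage can likewise be sketched as a conditional 3D auto-encoder that takes one synthesized part at 32³ together with a one-hot part-label code and outputs a refined 64³ occupancy grid. The layer sizes and conditioning scheme below are assumptions for illustration, not the published architecture.

```python
# Illustrative sketch (assumed layer sizes, not the published architecture):
# a conditional 3D auto-encoder that maps one synthesized part at 32^3,
# conditioned on a one-hot part label, to a refined 64^3 occupancy grid.
import torch
import torch.nn as nn

class PartRefiner(nn.Module):
    def __init__(self, n_parts=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 32, 4, 2, 1), nn.ReLU(True),    # 32^3 -> 16^3
            nn.Conv3d(32, 64, 4, 2, 1), nn.ReLU(True),   # 16^3 -> 8^3
        )
        self.label_fc = nn.Linear(n_parts, 64)            # condition on part label
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(128, 64, 4, 2, 1), nn.ReLU(True),  # 8^3 -> 16^3
            nn.ConvTranspose3d(64, 32, 4, 2, 1), nn.ReLU(True),   # 16^3 -> 32^3
            nn.ConvTranspose3d(32, 1, 4, 2, 1), nn.Sigmoid(),     # 32^3 -> 64^3
        )

    def forward(self, part_vox, label_onehot):
        f = self.encoder(part_vox)                        # (B, 64, 8, 8, 8)
        c = self.label_fc(label_onehot)                   # (B, 64)
        c = c.view(-1, 64, 1, 1, 1).expand_as(f)          # broadcast over the grid
        return self.decoder(torch.cat([f, c], dim=1))     # (B, 1, 64, 64, 64)

refiner = PartRefiner()
coarse = torch.rand(2, 1, 32, 32, 32)                     # one part from the GAN
label = torch.eye(4)[:2]                                  # hypothetical one-hot labels
fine = refiner(coarse, label)                             # refined 64^3 part
```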


Fig. 4. A gallery of our generated chairs, airplanes, lamps, and tables (top), each shown with its three nearest neighbors retrieved from the training set.



Fig. 5. PR improvement. For each category, we present four examples of the improvement achieved by the PR. Shapes generated by the G2LGAN are shown in the top row, with their PR-enhanced versions underneath for a clear comparison.




Fig. 9. Interpolating between two pairs of chairs (left and right, respectively). Shapes generated by the G2LGAN are shown in the top row, with their PR-enhanced versions underneath. The PR results stay sharp and clean throughout the interpolation, whereas typical latent-space interpolation results, such as those of the G2LGAN, exhibit artifacts and noise. This yields clear-cut transitions between the intermediate stages of the PR interpolation, visible in the arm-rests of the left example and the legs of the right example.
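The interpolation itself is the usual linear blend in latent space: decode convex combinations of two noise vectors with the trained generator. A minimal sketch, reusing the hypothetical Generator G from the sketch above:

```python
# Minimal sketch of the latent interpolation behind Fig. 9: linearly blend
# two noise vectors and decode each blend with a trained generator `G`
# (e.g. the hypothetical Generator defined in the earlier sketch).
import torch

def interpolate(G, z_a, z_b, steps=8):
    shapes = []
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z_a + t * z_b        # straight line in latent space
        shapes.append(G(z.unsqueeze(0)))     # one labeled 32^3 volume per step
    return shapes

z_a, z_b = torch.randn(128), torch.randn(128)
# sequence = interpolate(G, z_a, z_b)  # uses the hypothetical Generator above
```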



Fig. 10. Part-wise surface assembly. We present two examples for each category. The left column shows our generated shape, the middle columns show the part-wise nearest neighbors from the training set, and the right column shows our reconstructed mesh that fits the generated shape.
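Part-wise retrieval can be illustrated with a simple voxel-IoU nearest-neighbor search over the training parts; the IoU criterion and the toy data below are our own assumptions for the sketch, not necessarily the similarity measure used in the paper.

```python
# Sketch of per-part nearest-neighbor retrieval (our own illustration):
# for each generated part, find the training part with the highest voxel
# IoU, whose surface can then be used for the assembly in Fig. 10.
import numpy as np

def voxel_iou(a, b):
    """IoU between two boolean occupancy grids of equal shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def nearest_part(generated_part, training_parts):
    """Index of the training part most similar to the generated one."""
    scores = [voxel_iou(generated_part, p) for p in training_parts]
    return int(np.argmax(scores))

# Toy usage with random grids standing in for real part volumes.
rng = np.random.default_rng(0)
gen = rng.random((64, 64, 64)) > 0.5
bank = [rng.random((64, 64, 64)) > 0.5 for _ in range(10)]
best = nearest_part(gen, bank)
```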


ACKNOWLEDGEMENTS
We thank the anonymous reviewers for their valuable comments. This work was supported in part by NSFC (61522213, 61761146002, 61861130365, 61602311), 973 Program (2015CB352501), GD Science and Technology Program (2015A030312015), Shenzhen Innovation Program (KQJSCX20170727101233642, JCYJ20170302153208613) and ISF-NSFC Joint Research (2472/17).


Bibtex

@article{G2L18,
  title   = {Global-to-Local Generative Model for 3D Shapes},
  author  = {Hao Wang and Nadav Schor and Ruizhen Hu and Haibin Huang and Daniel Cohen-Or and Hui Huang},
  journal = {ACM Transactions on Graphics (Proc. SIGGRAPH ASIA)},
  volume  = {37},
  number  = {6},
  pages   = {214:1--214:10},
  year    = {2018},
}


Downloads