Co-Locating Style-Defining Elements on 3D Shapes

ACM Transactions on Graphics 2017

Ruizhen Hu1     Wenchao Li2     Oliver Van Kaick3     Hui Huang1,2*     Melinos Averkiou4     Daniel Cohen-Or2,5     Hao Zhang6

Shenzhen University1       SIAT2       Carleton University3       University of Cyprus4       Tel Aviv University5       Simon Fraser University6


Figure 1: These pieces of furniture have known coherent styles. Can we analyze their geometry and extract locatable style elements that define the different style groups?


Abstract 

We introduce a method for co-locating style-defining elements over a set of 3D shapes. Our goal is to translate high-level style descriptions, such as “Ming” or “European” for furniture models, into explicit and localized regions over the geometric models that characterize each style. For each style, the set of style-defining elements is defined as the union of all the elements that are able to discriminate the style. Another property of the style-defining elements is that they are frequently occurring, reflecting shape characteristics that appear across multiple shapes of the same style. Given an input set of 3D shapes spanning multiple categories and styles, where the shapes are grouped according to their style labels, we perform a cross-category co-analysis of the shape set to learn and spatially locate a set of defining elements for each style. This is accomplished by first sampling a large number of candidate geometric elements, and then iteratively applying feature selection to the candidates, to extract style-discriminating elements until no additional elements can be found. Thus, for each style label, we obtain sets of discriminative elements that together form the superset of defining elements for the style. We demonstrate that the co-location of style-defining elements allows us to address tasks such as style classification, and enables a variety of applications such as style-revealing view selection, style-aware sampling, and style-driven modeling for 3D shapes.
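The iterative selection described above can be sketched in code. This is a minimal greedy stand-in, not the paper's actual feature-selection procedure: it assumes a hypothetical matrix of candidate-element responses per shape, and repeatedly picks the candidate that best discriminates the target style until no remaining candidate exceeds a discriminability threshold.

```python
import numpy as np

def select_style_elements(responses, labels, target, thresh=0.7):
    """Greedy sketch of iterative style-element selection.

    responses: (num_shapes, num_candidates) array; responses[s, e] is how
    strongly candidate element e is detected on shape s (hypothetical scores).
    labels: per-shape style labels; target: the style being characterized.
    Repeatedly picks the candidate that best separates the target style,
    removes it from the pool, and stops when no candidate scores above
    `thresh` -- i.e., no additional discriminating element can be found.
    """
    in_style = (labels == target)
    pool = set(range(responses.shape[1]))
    selected = []
    while pool:
        best, best_score = None, thresh
        for e in pool:
            present = responses[:, e] > 0.5  # element considered present
            # discriminability: fraction of shapes where presence of the
            # element agrees with membership in the target style
            score = np.mean(present == in_style)
            if score > best_score:
                best, best_score = e, score
        if best is None:
            break  # no remaining candidate discriminates the style
        selected.append(best)
        pool.remove(best)
    return selected
```

The returned set plays the role of the discriminative elements whose union forms the style's defining superset; the real method selects over learned element similarities rather than a fixed threshold.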


Figure 2: A sample of a set of shapes given as input to our style co-analysis. Each style group is labeled on top and rendered in a different color. Note that our method is applicable to a wide variety of shape categories and styles beyond furniture.


Overview

Figure 3: Overview of our method for the discovery of style-defining elements. (a) We collect a set of initial elements from regions of the shapes, shown here as points in a 2D embedding. The distances between points reflect their similarity in terms of the features that describe them. Each element has a style label (point color). No clear clusters are present. (b) We sample candidate elements (indicated by the crosses at the circle centers) with an analysis of density, and learn a similarity measure for each element based on its nearest neighbors (points inside the circles). The insets show the geometric patches corresponding to the elements in the embedding. (c) We combine the candidate elements to discover sets of style-defining elements, e.g., E1 + E2 + E3 define the “red” style.
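Step (b) of the overview, sampling candidates by density and attaching a per-element similarity, can be illustrated with a small sketch. The density estimate and Gaussian similarity below are our simplifications (the function names and the k-th-neighbor bandwidth are assumptions, not the paper's learned measure).

```python
import numpy as np

def sample_candidates(feats, k=5, num_candidates=3):
    """Density-based candidate sampling sketch.

    feats: (n, d) feature vectors of the initial elements. Density is
    estimated from the distance to the k-th nearest neighbor (a small
    distance means a dense region); the densest elements become the
    candidates, and each keeps that distance as a per-element bandwidth,
    a simple stand-in for a similarity measure learned from neighbors.
    """
    diff = feats[:, None, :] - feats[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))     # pairwise distances
    kth = np.sort(dist, axis=1)[:, k]       # distance to k-th neighbor
    order = np.argsort(kth)                 # densest elements first
    candidates = order[:num_candidates]
    return candidates, kth[candidates]

def similarity(feats, candidate, bandwidth):
    """Gaussian similarity of every element to one candidate element."""
    d = np.linalg.norm(feats - feats[candidate], axis=1)
    return np.exp(-(d ** 2) / (2 * bandwidth ** 2 + 1e-12))
```

The points inside each circle in the figure correspond to the neighbors that fall within a candidate's bandwidth under such a per-element similarity.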


Results

Figure 4: Examples of style-defining elements selected by our method. We show one element per style. Note how the elements capture distinctive characteristics of each style.

Figure 5: Style-aware sampling. For each shape, we show our style-revealing scalar field on the left and the sampling on the right.
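Style-aware sampling as shown in Figure 5 can be sketched as importance sampling driven by the scalar field: points are drawn with probability proportional to their style score. This is a minimal illustration under our own assumptions (the field is assumed non-negative after clipping; the function name is hypothetical).

```python
import numpy as np

def style_aware_sample(points, style_field, n, rng=None):
    """Draw n surface points with probability proportional to a
    per-point style-revealing scalar field (sketch, not the paper's
    exact sampler)."""
    rng = np.random.default_rng(rng)
    w = np.clip(style_field, 0, None)      # field assumed non-negative
    p = w / w.sum()                        # normalize to a distribution
    idx = rng.choice(len(points), size=n, replace=False, p=p)
    return points[idx]
```

Regions where the field vanishes receive no samples, so the sampling concentrates on the style-defining parts of the shape.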


Acknowledgments

We thank the reviewers for their comments and suggestions. This work was supported in part by NSFC (61602311, 61522213, 61528208), 973 Program (2015CB352501), Guangdong Science and Technology Program (2014TX01X033, 2015A030312015, 2016A050503036), Shenzhen Innovation Program (JCYJ20151015151249564), Natural Science Foundation of SZU (827-000196) and NSERC (611370, 2015-05407).


Bibtex

@article{Style17,
title = {Co-Locating Style-Defining Elements on 3D Shapes},
author = {Ruizhen Hu and Wenchao Li and Oliver Van Kaick and Hui Huang and Melinos Averkiou and Daniel Cohen-Or and Hao Zhang},
journal = {ACM Transactions on Graphics},
volume = {36},
number = {3},
pages = {33:1--33:15},
year = {2017},
}