Sketching in Gestalt Space: Interactive Shape Abstraction through Perceptual Reasoning

Computer Graphics Forum 2018



Julian Kratt1             Till Niese1             Ruizhen Hu2             Hui Huang2             Soeren Pirk3             Andrei Sharf4             Daniel Cohen-Or5             Oliver Deussen1

1University of Konstanz          2Shenzhen University         3Stanford University            4Ben-Gurion University of the Negev        5Tel-Aviv University



Abstract

We present an interactive method that allows users to easily abstract complex 3D models with only a few strokes. The key idea is to employ well-known Gestalt principles to help generalize user input into a full model abstraction while accounting for the form, perceptual patterns, and semantics of the model. Using these principles, we alleviate the user's need to explicitly define shape abstractions. We exploit structural characteristics such as repetition, regularity, and similarity to transform user strokes into full 3D abstractions. As the user sketches over shape elements, we identify Gestalt groups and later abstract them to maintain their structural meaning. Unlike previous approaches, we operate directly on the geometric elements, in a sense applying Gestalt principles in 3D. We demonstrate the effectiveness of our approach with a series of experiments, including a variety of complex models and two extensive user studies that evaluate our framework.



Figure 1: User-assisted abstraction of a Japanese house. The user sketches their intent on parts of the object. The system automatically finds Gestalt groups based on the loose scribbles and abstracts these groups accordingly. By automatically propagating abstractions to similar geometric parts, the whole model is abstracted (right).



Figure 2: Visibility affects Gestalt formation: given two Gestalt groups in 3D (left), one group may be occluded by the other under certain viewpoints and thus is no longer visible as a group (right). The surrounding cylinder is rendered only to provide better spatial orientation.


Figure 3: Abstraction of the Japanese house with distinct sets of sketches. Using closed sketches or zig-zag lines (a) results in abstractions using embracing objects (b). Single strokes (c) instruct the system to use visual summarization (d). 



Figure 4: Abstraction of a 3D balcony model. The strength of the group simplification is based on visibility: the higher the occlusion, the stronger the abstraction.
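The occlusion-to-abstraction relationship in the caption above can be sketched as a simple monotone mapping. This is an illustrative sketch under our own assumptions, not the paper's implementation; the function name, the area-based occlusion estimate, and the linear mapping are all hypothetical.

```python
# Hypothetical helper (not the paper's code): map a group's occlusion
# ratio to an abstraction strength in [min_strength, max_strength].
def abstraction_strength(occluded_area: float, total_area: float,
                         min_strength: float = 0.0,
                         max_strength: float = 1.0) -> float:
    """The more a group is occluded from the current viewpoint,
    the more strongly it may be simplified."""
    if total_area <= 0.0:
        # A degenerate or fully hidden group can be abstracted maximally.
        return max_strength
    occlusion = min(max(occluded_area / total_area, 0.0), 1.0)
    return min_strength + (max_strength - min_strength) * occlusion
```

A fully visible group (occlusion 0) keeps its detail, while a fully occluded group is simplified most aggressively; any monotone mapping with these endpoints would serve the same role.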



Figure 5: Segmentation of our input models for further processing and two abstracted models printed in 3D.




Results

Figure 5: User-assisted abstraction (descriptions given in the text).


Figure 6: View-dependent abstraction of a city model. Each row shows the abstraction for a specific viewpoint of the scene. The position and orientation of each viewpoint are indicated by the red camera frustum. Based on each view, we compute our visibility terms, which then guide our Gestalt-based optimization and determine the amount of abstraction. A colour-coded visualization of element visibility is shown in (b); visible buildings are coloured blue. The final view-dependent abstractions, shown from above and from the perspective of each camera, are illustrated in (b) and (c).
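A per-element visibility term like the one described above could be approximated by testing how much of an element falls inside the camera's view cone. The sketch below is an assumption-laden illustration, not the paper's method: the `Camera` fields, the point-sampling strategy, and the cone test are all hypothetical stand-ins for a proper frustum and occlusion query.

```python
# Illustrative sketch (names and the cone approximation are assumptions,
# not the paper's code): estimate an element's visibility from a viewpoint
# as the fraction of its sample points inside the camera's view cone.
from dataclasses import dataclass

@dataclass
class Camera:
    position: tuple       # camera location (x, y, z)
    forward: tuple        # unit-length view direction
    cos_half_fov: float   # cosine of half the field of view

def visibility_term(points, cam: Camera) -> float:
    """Fraction of an element's sample points inside the view cone."""
    inside = 0
    for p in points:
        d = tuple(p[i] - cam.position[i] for i in range(3))
        norm = sum(c * c for c in d) ** 0.5
        if norm == 0.0:
            continue
        cos_angle = sum(d[i] * cam.forward[i] for i in range(3)) / norm
        if cos_angle >= cam.cos_half_fov:
            inside += 1
    return inside / len(points) if points else 0.0
```

In practice a real system would also account for occlusion (e.g. via depth buffering), which this cone test deliberately omits for brevity.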


Acknowledgements
This work was financially supported by grants from the DFG (620/20–1), the German-Israeli Foundation for Scientific Research and Development (I-1274-407.6), the National Science Foundation of China (61761146002, 61522213, and 61602311), the Guangdong Science and Technology Program (2015A030312015), the Shenzhen Innovation Program (JCYJ20170302153208613 and JCYJ20151015151249564), the Israel Science Foundation (1106/11 and 2472/17), the Max Planck Center for Visual Computing and Communication (MPC-VCC) funded by Stanford University, and the Federal Ministry of Education and Research of the Federal Republic of Germany (FKZ–01IMC01 and FKZ–01IM10001).


Bibtex

@ARTICLE{3DGestalt18,
    title = {Sketching in Gestalt Space: Interactive Shape Abstraction through Perceptual Reasoning},
    author = {Julian Kratt and Till Niese and Ruizhen Hu and Hui Huang and Soeren Pirk and Andrei Sharf and Daniel Cohen-Or and Oliver Deussen},
    journal = {Computer Graphics Forum},
    volume = {37},
    number = {6},
    pages = {188--204},
    year = {2018}
}

Downloads (faster for people in China)

Downloads (faster for people in other places)