Interaction Context (ICON): Towards a Geometric Functionality Descriptor

ACM Transactions on Graphics (Proceedings of SIGGRAPH 2015)

Ruizhen Hu1,2,3        Chenyang Zhu1       Oliver van Kaick4       Ligang Liu5       Ariel Shamir6       Hao Zhang1

1Simon Fraser University       2SIAT       3Zhejiang University       4Carleton University         5USTC         6The Interdisciplinary Center

Figure 1: Similarity between shapes (top) vs. similarity between functionalities (bottom). A shape descriptor (LFD) considers the middle cart more similar to the desk, as shown on the left using a 2D MDS projection of the distances between objects. Our contextual descriptor, interaction context or ICON, takes into account object-to-object interactions and identifies the two carts as more similar.
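
The 2D layouts in Figure 1 embed pairwise descriptor distances with multidimensional scaling (MDS). As a minimal sketch of how such a plot is produced, the following uses scikit-learn on an assumed distance matrix D (e.g., pairwise LFD or ICON distances between the objects); the values shown are made up for illustration.

# Minimal sketch: embed a pairwise distance matrix in 2D with MDS,
# as in the layouts of Figure 1. D is an assumed input, e.g., pairwise
# LFD or ICON distances; the numbers below are invented.
import numpy as np
from sklearn.manifold import MDS

def project_2d(D):
    """Return 2D coordinates whose Euclidean distances approximate D."""
    mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
    return mds.fit_transform(D)

# Usage: distances between three objects (cart, desk, cart).
D = np.array([[0.0, 0.4, 0.7],
              [0.4, 0.0, 0.6],
              [0.7, 0.6, 0.0]])
coords = project_2d(D)  # one 2D point per object, ready to scatter-plot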


Abstract

We introduce a contextual descriptor which aims to provide a geometric description of the functionality of a 3D object in the context of a given scene. In contrast to previous work, we do not regard functionality as an abstract label or represent it implicitly through an agent. Our descriptor, called interaction context or ICON for short, explicitly represents the geometry of object-to-object interactions. Our approach to object functionality analysis is based on the key premise that functionality should mainly be derived from interactions between objects, rather than from objects in isolation. Specifically, ICON collects geometric and structural features to encode interactions between a central object in a 3D scene and its surrounding objects. These interactions are then grouped based on feature similarity, leading to a hierarchical structure. By focusing on interactions and their organization, ICON is insensitive to the number of objects that appear in a scene, the specific disposition of objects around the central object, or the objects’ fine-grained geometry. With a series of experiments, we demonstrate the potential of ICON in functionality-oriented shape processing, including shape retrieval (either directly or by complementing existing shape descriptors), segmentation, and synthesis.
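
To make the grouping step concrete, here is a rough sketch that clusters per-interaction feature vectors into a merge hierarchy with standard agglomerative clustering. The feature extraction itself is not shown, the feature values are invented, and the paper's actual construction and similarity measure may differ.

# Hedged sketch: build a hierarchy over interaction feature vectors by
# repeatedly merging the most similar pair (agglomerative clustering).
# Extracting the geometric/structural features per interaction is assumed.
import numpy as np
from scipy.cluster.hierarchy import linkage

def build_icon_hierarchy(features):
    """features: (n_interactions, d) array, one row per interaction.
    Returns a SciPy linkage matrix encoding the binary merge tree."""
    return linkage(features, method='average', metric='euclidean')

# Usage: 4 interactions described by hypothetical 3D feature vectors.
F = np.array([[0.1, 0.9, 0.2],   # e.g., a chair-table interaction
              [0.2, 0.8, 0.1],   # another chair
              [0.9, 0.1, 0.7],   # an object resting on the tabletop
              [0.8, 0.2, 0.6]])  # another object on the tabletop
Z = build_icon_hierarchy(F)      # similar interactions merge first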


Download and Reference  

You are welcome to download our source code and data below.

To reference our algorithm, code, or data in a publication, please cite the BibTeX entry below and include a link to this website.


Overview

Figure 2: Overview of construction and matching of ICONs. Given an input scene with the central object (orange table) in (a), we detect interactions between the central object and other objects. The interacting objects are shown with bright colors in (b), while non-interacting objects (the apple and banana) are shown in gray. Next, we group the interactions into a hierarchical structure to obtain the ICON descriptor shown in (c). Each leaf node corresponds to an interaction and has the same color as the object in (b) that gives rise to the interaction, while internal nodes group similar interactions. (d) shows the descriptor of the scene in (e). The two ICON descriptors in (c) and (d) are matched by finding a common subtree isomorphism. We obtain the intuitive correspondence between objects on the tables and chairs, shown by the matched portions of the hierarchies selected by the dashed contours. Note that the floor and extra objects in (e) do not have a match.
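
The matching between the hierarchies in (c) and (d) is described as a common subtree isomorphism. The recursive sketch below conveys the flavor of such a matching on small, unordered binary trees; it is a loose illustration under assumed leaf labels, not the paper's algorithm, and a practical version would memoize the recursion.

# Loose sketch: score the best common-subtree correspondence between two
# hierarchies, allowing subtrees on either side to remain unmatched.
# Illustrative only; the paper's algorithm and costs are not reproduced.
class Node:
    def __init__(self, label=None, left=None, right=None):
        self.label, self.left, self.right = label, left, right

def leaf_sim(a, b):
    # Hypothetical leaf similarity, e.g., derived from the geometric
    # features of the two interactions.
    return 1.0 if a.label == b.label else 0.0

def match(a, b):
    """Best achievable similarity when subtree a is matched to subtree b."""
    if a is None or b is None:
        return 0.0                      # unmatched subtrees score nothing
    if a.left is None and b.left is None:
        return leaf_sim(a, b)           # two leaves: compare interactions
    # Trees are unordered: try both child pairings, and also let one side
    # descend while the other stays put (skipping an inner node).
    return max(
        match(a.left, b.left) + match(a.right, b.right),
        match(a.left, b.right) + match(a.right, b.left),
        match(a.left, b), match(a.right, b),
        match(a, b.left), match(a, b.right),
    )

# Usage: two tiny hierarchies over interactions labeled by type.
t1 = Node(left=Node('support'), right=Node('hang'))
t2 = Node(left=Node('hang'),
          right=Node(left=Node('support'), right=Node('lean')))
print(match(t1, t2))  # 2.0: both 'support' and 'hang' find matches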


Figure 3: ICON is data-dependent: different interactions lead to different descriptors, i.e., hierarchies. We show two input scenes where different types of interactions take place with the same central object (orange table). Note how the corresponding hierarchies have a different structure in each case.

Results

Figure 4: Examples of retrieval on our dataset with ICON and other descriptors. In each row, the query is to the left, while the top-5 results are on the right. The central object is colored in orange. Note in the last four rows how combining ICON and LFD improves the accuracy of the results for the desk, and how the results for the dining table are more accurate for ICON than for IBSH.
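
A simple way to combine ICON with a shape descriptor such as LFD, in the spirit of the combined results above, is a weighted sum of normalized distances. The normalization and the weight alpha below are assumptions for illustration, not the paper's formulation.

# Hedged sketch: rank database shapes by a weighted combination of a
# contextual (ICON) distance and a geometric (LFD) distance. The min-max
# normalization and the weight alpha are assumptions.
import numpy as np

def combined_ranking(d_icon, d_lfd, alpha=0.5):
    """d_icon, d_lfd: arrays of query-to-database distances.
    Returns database indices sorted from best to worst match."""
    norm = lambda d: (d - d.min()) / (d.max() - d.min() + 1e-12)
    score = alpha * norm(d_icon) + (1.0 - alpha) * norm(d_lfd)
    return np.argsort(score)

# Usage: top-5 retrieval results, as in each row of Figure 4.
d_icon = np.array([0.2, 0.8, 0.1, 0.5, 0.9, 0.3])
d_lfd  = np.array([0.4, 0.3, 0.2, 0.7, 0.6, 0.1])
top5 = combined_ranking(d_icon, d_lfd)[:5]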

Figure 5: Segmentation of interacting regions and shapes that support multiple interactions (hybrids). Given the input scenes in (a), we obtain the segmentations of the central objects in (b). Next, we match the parts that support similar interactions (shown in blue) and transfer the other regions (green) from each shape on the left in (b) to each shape on the right in (b), to obtain the hybrids in (c). Note how the hybrids resemble the real-world designs shown in the red boxes.
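
At a high level, the hybrid construction keeps the parts that support the matched interactions and swaps in the remaining regions from the other shape. The sketch below illustrates this on pre-segmented, labeled parts; the segmentation, matching, and geometric alignment steps are assumed, and all names are hypothetical.

# Hedged sketch: form a hybrid by keeping the parts of shape A that
# support the matched interactions and adding the remaining (transferred)
# parts of shape B. Shapes are assumed pre-segmented into labeled parts;
# aligning the transferred geometry is not reproduced here.
def make_hybrid(parts_a, parts_b, matched_labels):
    """parts_*: dict mapping part label -> geometry (opaque here).
    matched_labels: labels of parts supporting similar interactions."""
    hybrid = {lbl: geo for lbl, geo in parts_a.items()
              if lbl in matched_labels}
    hybrid.update({lbl: geo for lbl, geo in parts_b.items()
                   if lbl not in matched_labels})
    return hybrid

# Usage: keep the matched 'top' of a desk, transfer the cart's wheels.
desk = {'top': 'desk_top_mesh', 'legs': 'desk_legs_mesh'}
cart = {'top': 'cart_top_mesh', 'wheels': 'cart_wheels_mesh'}
hybrid = make_hybrid(desk, cart, matched_labels={'top'})
# -> {'top': 'desk_top_mesh', 'wheels': 'cart_wheels_mesh'}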


Acknowledgments


We sincerely thank the reviewers for their comments, suggestions, and the tremendous effort they put in over the iterations of the paper during the revision phase, leading to its final functional form. Thanks also go to Hadar Averbuch-Elor for being the voice in the video and for her careful proofreading of the paper. This work was supported in part by grants from NSERC (No. 611370), GRAND NCE, NSFC (61232011, 61222206), National 973 Program (2014CB360503), and Shenzhen Key Lab (CXB201104220029A).



BibTex

@article{ICON15,
    title = {Interaction Context (ICON): Towards a Geometric Functionality Descriptor},
    author = {Ruizhen Hu and Chenyang Zhu and Oliver van Kaick and Ligang Liu and Ariel Shamir and Hao Zhang},
    journal = {ACM Transactions on Graphics (Proc. SIGGRAPH)},
    volume = {34},
    number = {4},
    pages = {83:1--83:12},
    year = {2015},
}