Mobility-Trees for Indoor Scenes Manipulation

Computer Graphics Forum 2014

Andrei Sharf1,2    Hui Huang2*    Cheng Liang2    Jiapei Zhang2   Baoquan Chen2,3    Minglun Gong2,4  

1Ben Gurion University    2Shenzhen VisuCA/SIAT    3Shandong University    4Memorial University   


Figure 1: In a complex indoor scene (left), our method detects the functional mobilities of furniture objects and parts (zoom-ins), allowing easy manipulation and reorganization (right).


In this work we introduce the mobility-tree construct for the high-level functional representation of complex 3D indoor scenes. In recent years, digital indoor scenes have become increasingly popular, consisting of detailed geometry and complex functionalities. These scenes often contain objects that reoccur in various poses and interrelate with each other. We analyze the reoccurrence of objects in the scene and automatically detect their functional mobilities. Mobility analysis denotes the motion capabilities (i.e., degrees of freedom) of an object and its subparts, which typically relate to their indoor functionalities. We compute an object's mobility by analyzing its spatial arrangement, repetitions and relations with other objects, and store it in a mobility-tree. Repetitive motions in the scene are grouped into mobility-groups, for which we develop a set of sophisticated controllers facilitating semantic high-level editing operations. We show applications of our mobility analysis to interactive scene manipulation and reorganization, and present results for a variety of indoor scenes.
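The abstract describes storing per-part motion capabilities in a tree whose nodes follow the scene's support relations. A minimal sketch of such a structure might look as follows; the names `Mobility` and `MobilityNode` and the cabinet example are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Mobility:
    kind: str                         # "translation" or "rotation"
    axis: Tuple[float, float, float]  # unit motion axis
    limits: Tuple[float, float]       # allowed motion range (min, max)

@dataclass
class MobilityNode:
    part: str                                  # supporting part
    mobilities: List[Mobility] = field(default_factory=list)
    children: List["MobilityNode"] = field(default_factory=list)  # supported parts

# Hypothetical cabinet: three drawers translating along +z within [0, 0.4] m.
cabinet = MobilityNode("cabinet body")
for i in range(3):
    drawer = MobilityNode(f"drawer {i}")
    drawer.mobilities.append(Mobility("translation", (0.0, 0.0, 1.0), (0.0, 0.4)))
    cabinet.children.append(drawer)

print(len(cabinet.children))  # → 3
```

Grouping the three drawer nodes under one supporting node mirrors the mobility-group idea: an edit applied to the group can traverse the children and move them simultaneously.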


Figure 2: Extracting mobilities from repeating chairs around a table. We detect the rotational mobility of the chairs around the table (left), the translational mobility of armrests and chair legs (mid-left), and the rotational mobility of wheels and chair backs (mid-right). The scene regularized using our mobility controllers is shown on the right.

Figure 3: Algorithm overview on a 3D cabinet scene. Left-to-right: input scene; segmentation into drawers and body; support tree consisting of one supporting node (the cabinet body) and its supported drawers; detected mobilities defined by translational axes and limits.
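The cabinet example pairs each detected mobility with explicit limits, so an edit is applied by moving along the mobility axis and clamping to the allowed range. A hedged sketch of such a controller step, with an illustrative function name not taken from the paper:

```python
def apply_translation(position, axis, amount, limits):
    """Move `position` along unit `axis` by `amount`, clamped to [lo, hi]."""
    lo, hi = limits
    amount = max(lo, min(hi, amount))  # respect the detected motion limits
    return tuple(p + a * amount for p, a in zip(position, axis))

# Drawer front at the origin, pulled 0.7 m along +z but limited to 0.4 m:
print(apply_translation((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.7, (0.0, 0.4)))
# → (0.0, 0.0, 0.4)
```

Clamping before applying the motion keeps interactive edits inside the physically plausible range the analysis extracted, e.g. a drawer cannot be dragged out past its rails.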

Figure 4: We compute mobilities in complex indoor scenes (left column). Using computed controllers (green arrows), we can simultaneously align and regularize objects, allowing easy manipulation of the scenes (right column).

Figure 5: Three scene models reconstructed from raw scans (left). We detect mobilities (middle) and reorganize the scenes using our algorithm (right).

Figure 6: Mobility extraction from repeating cars in a parking lot (top row). The middle and bottom rows show simultaneous mobility editing of car parts.

Figure 7: Complex rotational mobilities are progressively extracted from repetitions of a robot-arm.


The authors would like to thank all the reviewers for their valuable comments. This work was partially supported by grants from NSFC (61103166, 61232011, 61025012), CAS Young Scientists (2013Y1GA0007), Guangdong Science and Technology Program (2011B050200007), Shenzhen Innovation Program (CXB201104220029A, KQCX20120807104901791, JCYJ20130401170306810, ZD201111080115A, KC2012JSJS0019A), Israel Science Foundation (ISF) and European IRG FP7.


title = {Mobility-Trees for Indoor Scenes Manipulation},
author = {A. Sharf and H. Huang and C. Liang and J. Zhang and B. Chen and M. Gong},
journal = {Computer Graphics Forum},
volume = {33},
number = {1},
pages = {2--14},
year = {2014},

Copyright © 2016-2018 Visual Computing Research Center