Learning to Predict Part Mobility from a Single Static Snapshot


 ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2017) 

Ruizhen Hu1        Wenchao Li1         Oliver Van Kaick2         Ariel Shamir3         Hao Zhang4         Hui Huang1*

Shenzhen University1            Carleton University2            The Interdisciplinary Center Herzliya3            Simon Fraser University4

Figure 1: We introduce a data-driven approach for learning a part mobility model, which enables an understanding of part motions in a 3D object based only on a single static snapshot of the object. The learning is based on a training set of mobility units of different motion types, M1, M2, ..., as in (a). Each unit is represented by multiple snapshots over its motion sequence, along with associated motion parameters. The part mobility model (b) is composed of the start and end snapshots of each unit and a static(snapshot)-to-dynamic(unit) (S-D) mapping function learned from training data. Given a query 3D shape, shown at the bottom of (b), we find the closest mobility unit from the training set via the S-D mapping (b). Aside from motion prediction, the unit also provides a means to transfer its motion to the query shape, as shown in (c)-left. In (c)-right, we show mobility prediction and transfer on five different parts of a static scooter model, along with the units found via S-D mapping.



We introduce a method for learning a model of the mobility of parts in 3D objects. Our method not only allows us to understand the dynamic functionalities of one or more parts in a 3D object, but also to apply the mobility functions to static 3D models. Specifically, the learned part mobility model can predict mobilities for parts of a 3D object given in the form of a single static snapshot reflecting the spatial configuration of the object parts in 3D space, and transfer the mobility from relevant units in the training data. The training data consists of a set of mobility units of different motion types. Each unit is composed of a pair of 3D object parts (one moving and one reference part), along with usage examples consisting of a few snapshots capturing different motion states of the unit. Taking advantage of a linearity characteristic exhibited by most part motions in everyday objects, and utilizing a set of part-relation descriptors, we define a mapping from static snapshots to dynamic units. This mapping employs a motion-dependent snapshot-to-unit distance obtained via metric learning. We show that our learning scheme leads to accurate motion prediction from single static snapshots and allows proper motion transfer. We also demonstrate other applications such as motion-driven object detection and motion hierarchy construction.
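The core retrieval step described above can be sketched as follows: a query snapshot is encoded as a part-relation descriptor vector, its distance to each training unit is the minimum weighted distance over the unit's stored snapshots, and the closest unit is returned. This is a minimal sketch, not the paper's implementation; the function names, the descriptor vectors, and the per-unit learned weights (which the paper obtains via metric learning) are all assumed inputs here.

```python
import numpy as np

def snapshot_to_unit_distance(query_desc, unit_snapshots, weights):
    """Weighted distance from a static snapshot to one mobility unit.

    query_desc: part-relation descriptor of the query snapshot, shape (d,).
    unit_snapshots: descriptors of the unit's stored snapshots
                    (e.g. its start and end states), shape (k, d).
    weights: per-dimension weights for this unit's motion type,
             assumed to come from metric learning (given here as input).
    The snapshot-to-unit distance is the minimum over stored snapshots.
    """
    diffs = unit_snapshots - query_desc                  # (k, d)
    dists = np.sqrt((weights * diffs ** 2).sum(axis=1))  # (k,)
    return dists.min()

def predict_unit(query_desc, units, weight_sets):
    """Return the index of the training unit closest to the query snapshot.

    units: list of (k_i, d) arrays of snapshot descriptors, one per unit.
    weight_sets: one learned weight vector per unit.
    """
    dists = [snapshot_to_unit_distance(query_desc, u, w)
             for u, w in zip(units, weight_sets)]
    return int(np.argmin(dists))
```

With the predicted unit in hand, its motion type, axis, and range would then be transferred to the query part pair, as illustrated in the figures on this page.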




Figure 2: Comparing mobility unit prediction by using our part mobility model obtained with metric learning (“Ours'') to geometry-based retrieval using the LFD descriptor (left) and to our part mobility model but with uniform weights in the snapshot-to-unit distance (right).


Figure 3: Examples of motion transfer obtained with our method after predicting the correct motion type for each snapshot. The transformation axes (for rotation or translation) are denoted with dashed lines, while the motion is indicated by the arrows.


Figure 4: Examples of motion prediction for all the parts of different shapes, and the corresponding motion hierarchies for the shapes. The query shapes are shown in the middle of the various groups, and the units in the training data closest to different pairs of parts are indicated by numbers. The colors of nodes in the hierarchy indicate the correspondence to the shape parts.


Download and Reference  

We will release our source code and data soon.

To reference our algorithm, code or data in a publication, please include the bibtex below and a link to this website.


@article{Hu17,
    title = {Learning to Predict Part Mobility from a Single Static Snapshot},
    author = {Ruizhen Hu and Wenchao Li and Oliver van Kaick and Ariel Shamir and Hao Zhang and Hui Huang},
    journal = {ACM Transactions on Graphics (Proc. SIGGRAPH Asia)},
    volume = {36},
    number = {6},
    year = {2017},
    pages = {227:1--13},
}

Copyright © 2016-2018 Visual Computing Research Center