Appearance Modeling via Proxy-to-Image Alignment

ACM Transactions on Graphics 2017


 Hui Huang1*         Ke Xie1          Lin Ma1        Dani Lischinski2           Minglun Gong3          Xin Tong4            Daniel Cohen-Or1,5

1Shenzhen University         2The Hebrew University of Jerusalem        3Memorial University of Newfoundland        4Microsoft Research Asia        5Tel-Aviv University



Fig. 1. Assisted by a rough 3D proxy, our approach can extract the geometric and photometric appearance of a fire hydrant from a single photo (left) and transfer it to a new target shape (R2-D2 from Star Wars).


Abstract 

Endowing 3D objects with realistic surface appearance is a challenging and time-consuming task, since real-world surfaces typically exhibit a plethora of spatially varying geometric and photometric detail. Not surprisingly, computer artists commonly use images of real-world objects as inspiration and reference for their digital creations. However, despite two decades of research on image-based modeling, there are still no tools available for automatically extracting the detailed appearance (micro-geometry and texture) of a 3D surface from a single image. In this paper, we present a novel user-assisted approach for quickly and easily extracting a non-parametric appearance model from a single photograph of a reference object.


The extraction process requires a user-provided proxy, whose geometry roughly approximates that of the object in the image. Since the proxy is just a rough approximation, it is necessary to align and deform it so as to match the reference object. The main contribution of this paper is a novel technique to perform such an alignment, which enables accurate joint recovery of geometric detail and reflectance. The correlations between the recovered geometry at various scales and the spatially varying reflectance constitute a non-parametric appearance model. Once extracted, the appearance model may then be applied to various 3D shapes, whose large scale geometry may differ considerably from that of the original reference object. Thus, our approach makes it possible to construct an appearance library, allowing users to easily enrich detail-less 3D shapes with realistic geometric detail and surface texture.
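To make the notion of a non-parametric appearance model more concrete, the sketch below illustrates one plausible realization (not the paper's exact formulation): a recovered depth map is split into a smooth base and fine-scale detail bands via Gaussian smoothing, and the per-pixel band values are paired with the recovered reflectance to form the model's samples. All function and variable names here are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detail_bands(depth, sigmas=(2.0, 8.0, 32.0)):
    """Split a recovered depth map into multi-scale detail bands.

    Returns (base, bands): `base` is the coarsest smoothed depth and
    `bands` is a list of band-pass layers, finest first. Summing
    base + all bands reconstructs the input depth exactly.
    """
    bands = []
    current = depth.astype(np.float64)
    for sigma in sigmas:
        smoothed = gaussian_filter(current, sigma)
        bands.append(current - smoothed)   # detail lost at this scale
        current = smoothed
    return current, bands

# Toy usage: couple each pixel's multi-scale detail with its color.
depth = np.random.rand(128, 128)            # stand-in for recovered Z
reflectance = np.random.rand(128, 128, 3)   # stand-in for recovered R
base, bands = detail_bands(depth)
samples = np.concatenate(
    [b[..., None] for b in bands] + [reflectance], axis=-1)
# samples[y, x] now pairs geometric detail at several scales with
# spatially varying reflectance, in the spirit of the model above.
```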


Fig. 2. Stress test for initial proxy placements, reporting the average alignment error for the visible proxy edges (right, in pixel units) under various initial placements. The alignment error is computed with respect to manually defined ground-truth edges, shown on the left. Qualitative results are shown in the middle, with colors corresponding to the error plot lines. One failure case is shown in the second row, where our HMM correspondences are partially wrong due to the extreme deviation of the initial placement (black).
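The HMM correspondences referenced in the caption match an ordered sequence of proxy contour samples against candidate image edge points. A generic Viterbi decoding of such a chain model is sketched below; the unary and pairwise cost definitions are illustrative assumptions, not the paper's actual energy terms.

```python
import numpy as np

def viterbi_match(unary, pairwise):
    """Match T ordered proxy contour samples to K edge candidates.

    unary:    (T, K) cost of assigning sample t to candidate k
              (e.g. distance from candidate to the projected proxy edge).
    pairwise: (K, K) cost of consecutive assignments
              (e.g. penalizing jumps that break edge continuity).
    Returns the minimum-cost assignment, one candidate per sample.
    """
    T, K = unary.shape
    cost = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        total = cost[:, None] + pairwise          # (prev, next) costs
        back[t] = np.argmin(total, axis=0)        # best predecessor
        cost = total[back[t], np.arange(K)] + unary[t]
    path = [int(np.argmin(cost))]                 # best final state
    for t in range(T - 1, 0, -1):                 # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```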

Fig. 3. Aligning an image object with a progressively finer set of 3D proxies (initial poses shown in green). The detected edge map and our computed edge-saliency potential field, shown in the second row, guide the rigid alignment (blue) and non-rigid deformation (red) of the proxies. Even the very coarse initial proxy on the left is successfully aligned. The alignment errors are plotted on the right, using colors corresponding to those of the disks next to each of the four proxies.
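One simple way to realize an edge-saliency potential field of this kind (a hedged sketch, not necessarily the paper's construction) is a distance transform of the binary edge map, so that projected proxy edges are attracted toward nearby image edges and an alignment cost can be read off directly:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_potential(edge_map, falloff=10.0):
    """Potential that is low on image edges and rises smoothly with
    distance to the nearest edge pixel.

    edge_map: boolean array, True where an edge was detected.
    """
    dist = distance_transform_edt(~edge_map)   # distance to nearest edge
    return 1.0 - np.exp(-dist / falloff)

def alignment_cost(potential, points):
    """Mean potential sampled at projected proxy-edge points (nearest
    pixel lookup); lower values indicate a better alignment."""
    ys = np.clip(np.round(points[:, 1]).astype(int), 0, potential.shape[0] - 1)
    xs = np.clip(np.round(points[:, 0]).astype(int), 0, potential.shape[1] - 1)
    return float(potential[ys, xs].mean())
```

Minimizing such a cost over a rigid pose, and then over per-vertex offsets, would correspond to the blue and red stages in the figure.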


Fig. 4. Intrinsic decomposition results (top row: Z; bottom row: R) obtained using different approaches for an input photograph (a). Without a proxy, the original SIRFS method outputs an overly smooth Z and a noisy R (b). More geometric detail is recovered using our approach with a 3D proxy (c and d). Nevertheless, when the imprecise alignment obtained using [Kraevoy et al. 2009] (the green one in Figure 5(c), which is the best of the three) is used, artifacts show up along sharp edges in both Z and R (c). These artifacts are not present with our alignment (d); see the zoomed-in views for a closer comparison.
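For intuition, the kind of proxy-constrained intrinsic decomposition compared here can be summarized by a simple energy. The sketch below is a heavily simplified Lambertian data term plus a proxy depth prior, with an assumed known directional light; it is far from the full SIRFS formulation and all names are illustrative.

```python
import numpy as np

def normals_from_depth(Z):
    """Per-pixel unit normals from a depth map via finite differences."""
    gy, gx = np.gradient(Z)
    n = np.dstack([-gx, -gy, np.ones_like(Z)])
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def energy(I, R, Z, light, Z_proxy, lam=0.1):
    """Simplified objective: a Lambertian image-formation term plus a
    prior keeping the recovered depth near the aligned proxy.

    I, R: (H, W, 3) image and reflectance; Z, Z_proxy: (H, W) depth;
    light: (3,) directional light (assumed known here for brevity).
    """
    shading = np.clip(normals_from_depth(Z) @ light, 0.0, None)
    data = np.sum((I - R * shading[..., None]) ** 2)
    prior = lam * np.sum((Z - Z_proxy) ** 2)
    return data + prior
```

A misaligned proxy makes the prior pull Z toward the wrong place near silhouettes, which is one way to understand the sharp-edge artifacts in (c).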


Results


Fig. 5. Using single input photos and coarse 3D proxies (top), we construct a small library of appearance models for five categories of materials. This allows users to easily add photorealistic detail to target shapes and experiment with different appearances. While bumpiness is introduced to all target shapes, close inspection shows that the character of the bumps differs considerably among the materials. Note that most of the resulting shapes use at least two appearance models, extracted from different photos, on different parts; the parts and their assigned appearances are indicated by the user.


Fig. 6. Adjusting the displacement magnitude during appearance transfer. The stone appearance model extracted in Figure 11 is applied to different target shapes with different displacement magnitude settings, so the resulting surface detail can be rougher (rooster) or smoother (snail). For each shape, four models are shown: the user-provided proxy P_target (top-left), the deformed model P_deform (top-right), the final displaced geometry D_f(P_deform) (bottom-left), and the texture-mapped result (bottom-right).
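The magnitude adjustment itself amounts to scaling the transferred displacement before offsetting vertices along their normals. A minimal sketch, assuming per-vertex unit normals and a displacement value already sampled per vertex (names are illustrative):

```python
import numpy as np

def apply_displacement(vertices, normals, disp, magnitude=1.0):
    """Offset each vertex along its unit normal by a scaled
    displacement value.

    vertices, normals: (N, 3) arrays; disp: (N,) per-vertex values,
    e.g. sampled from the transferred displacement map.
    `magnitude` is the user knob: >1 gives rougher detail, <1 smoother.
    """
    return vertices + magnitude * disp[:, None] * normals
```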


Fig. 7. A simple input scene (left) is enriched using appearance models extracted from photos of different materials (refer to Figure 14). From top to bottom, the four fish models have metal, wood, bread, and stone appearances applied, respectively. The base has a fabric appearance applied.


Acknowledgments

We thank the anonymous reviewers for their valuable comments. This work was supported in part by NSFC (61522213, 6171101005), 973 Program (2015CB352501), Guangdong Science and Technology Program (2015A030312015), Natural Science Foundation of Shenzhen University (827-000196), NSERC (2017-06086), ISF (2366/16) and ISF-NSFC Joint Research Program (2472/17, 2217/15). Hui Huang (hhzhiyan@gmail.com) is the corresponding author of this paper.


Bibtex

@article{Huang17,
title = {Appearance Modeling via Proxy-to-Image Alignment},
author = {Hui Huang and Ke Xie and Lin Ma and Dani Lischinski and Minglun Gong and Xin Tong and Daniel Cohen-Or},
journal = {ACM Transactions on Graphics},
volume = {36},
number = {6},
pages = {},  
year = {2017},
}
