
Appearance Modeling via Proxy-to-Image Alignment
ACM Transactions on Graphics 2018
Hui Huang¹*, Ke Xie¹, Lin Ma¹, Dani Lischinski², Minglun Gong³, Xin Tong⁴, Daniel Cohen-Or¹,⁵
¹Shenzhen University  ²The Hebrew University of Jerusalem  ³Memorial University of Newfoundland  ⁴Microsoft Research Asia  ⁵Tel-Aviv University
Abstract
Endowing 3D objects with realistic surface appearance is a challenging and time-consuming task, since real-world surfaces typically exhibit a plethora of spatially varying geometric and photometric detail. Not surprisingly, computer artists commonly use images of real-world objects as an inspiration and a reference for their digital creations. However, despite two decades of research on image-based modeling, there are still no tools available for automatically extracting the detailed appearance (micro-geometry and texture) of a 3D surface from a single image. In this paper, we present a novel user-assisted approach for quickly and easily extracting a non-parametric appearance model from a single photograph of a reference object.
Fig. 3. Aligning an image object with a progressively finer set of 3D proxies (initial poses shown in green). The detected edge map and our computed edge-saliency potential field are shown in the second row of Figure 3, guiding the rigid alignment (shown in blue) and non-rigid deformation (shown in red) of proxies. Even the very coarse initial proxy on the left is successfully aligned. The alignment errors are plotted on the right, using colors corresponding to those of the disks next to each of the four proxies.
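The paper's exact formulation of the edge-saliency potential field is not reproduced on this page; the following is a minimal sketch of one common way to build such a field, using gradient-magnitude edge detection followed by a distance transform so that an alignment optimizer can slide proxy contours downhill onto nearby image edges. The function name and threshold are illustrative, not the paper's API.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, sobel

def edge_saliency_field(image, edge_thresh=0.2):
    """Build a potential field that is low near strong image edges.

    A toy stand-in for the edge-saliency field of Fig. 3: detect edges
    by gradient magnitude, then take the distance transform, so the
    field is zero on edge pixels and grows with distance from them.
    """
    gx = sobel(image, axis=1)           # horizontal gradient
    gy = sobel(image, axis=0)           # vertical gradient
    mag = np.hypot(gx, gy)
    edges = mag > edge_thresh * mag.max()
    # Distance to the nearest edge pixel: zero on edges, growing away.
    return distance_transform_edt(~edges)

# Tiny example: a vertical step edge in the middle of an 8x8 image.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
field = edge_saliency_field(img)
```

Minimizing the field values sampled along a proxy's projected silhouette then pulls the proxy toward the detected edges, which matches the role the caption describes for the rigid and non-rigid alignment stages.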
Fig. 4. Intrinsic decomposition results (top row: Z; bottom row: R) obtained using different approaches for an input photograph (a). Without a proxy, the original SIRFS method outputs an overly smooth Z and a noisy R (b). More geometric details are recovered when a 3D proxy is used (c and d). Nevertheless, when the imprecise alignment obtained using [Kraevoy et al. 2009] (the green one in Figure 5(c), which is the best of the three) is used, artifacts show up along sharp edges in both Z and R (c). These artifacts are not present in our approach (d); see the zoomed-in views for a closer comparison.
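SIRFS solves a much richer joint optimization over shape, illumination, and reflectance; the sketch below only illustrates the core multiplicative intrinsic-image model underlying Fig. 4. Assuming (hypothetically) per-pixel proxy normals and a known light direction, Lambertian shading is S = max(n·l, 0) and reflectance can be read off as R = I / S. All names here are illustrative.

```python
import numpy as np

def reflectance_from_proxy(image, normals, light_dir):
    """Toy intrinsic decomposition I = R * S using proxy shading.

    `normals` is an (H, W, 3) array of unit proxy normals and
    `light_dir` a known directional light -- both assumptions made
    for this sketch, not part of the paper's pipeline.
    """
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    shading = np.clip(normals @ light, 0.0, None)   # (H, W) Lambertian term
    eps = 1e-6                                      # avoid divide-by-zero
    return image / (shading + eps), shading

# Tiny example: a flat patch facing the light exactly -> S = 1, R = I.
H, W = 4, 4
normals = np.zeros((H, W, 3))
normals[..., 2] = 1.0
image = np.full((H, W), 0.5)
R, S = reflectance_from_proxy(image, normals, light_dir=[0, 0, 1])
```

The benefit the caption attributes to a well-aligned proxy follows directly from this model: errors in the assumed normals corrupt S, and those errors leak into the recovered R, which is exactly the edge artifact shown for the misaligned case.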
Fig. 5. Using single input photos and coarse 3D proxies (top), we construct a small library of appearance models for five different categories of materials. This allows users to easily add photorealistic details to target shapes and experiment with different appearances. While bumpiness is introduced to all target shapes, close inspection shows that the character of the bumps is quite different among the different materials. Note that most resulting shapes utilize at least two appearance models extracted from different photos for their different parts. The different parts, and their assigned appearance, are indicated by the user.
Fig. 6. Adjustment of displacement magnitude during appearance transfer. The stone appearance model extracted from Figure 11 is applied to different target shapes with different displacement-magnitude settings. The resulting surface detail can therefore be rougher (rooster) or smoother (snail). For each shape, four models are shown: the user-provided proxy Ptarget (top-left), the deformed model Pdeform (top-right), the final displaced geometry Df (Pdeform) (bottom-left), and the texture-mapped result (bottom-right).
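The displacement step the caption describes can be sketched as moving each vertex of the deformed model along its normal by a per-vertex height from the extracted displacement map, scaled by the user-adjustable magnitude. The function name and signature below are illustrative, not the paper's API.

```python
import numpy as np

def apply_displacement(vertices, normals, heights, magnitude=1.0):
    """Displace mesh vertices along their normals, as in Fig. 6.

    `heights` holds per-vertex values sampled from the extracted
    displacement map; `magnitude` is the user-controlled scale that
    makes the transferred detail rougher or smoother.
    """
    return vertices + magnitude * heights[:, None] * normals

# Tiny example: two vertices pushed along +z by different amounts.
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0]])
norms = np.array([[0.0, 0.0, 1.0],
                  [0.0, 0.0, 1.0]])
h = np.array([0.2, 0.5])
out = apply_displacement(verts, norms, h, magnitude=2.0)
```

Scaling `magnitude` up or down reproduces the rougher-versus-smoother trade-off shown for the rooster and snail targets.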
Fig. 7. A simple input scene (left) is enriched using appearance models extracted from photos of different materials (refer to Figure 14). From top to bottom, the four fish models have metal, wood, bread, and stone appearances applied, respectively. The base has a fabric appearance applied.
Acknowledgements
We thank the anonymous reviewers for their valuable comments. This work was supported in part by the NSFC (61522213, 61761146002), 973 Program (2015CB352501), Guangdong Science and Technology Program (2015A030312015), Natural Science Foundation of Shenzhen University (827-000196), NSERC (2017-06086), ISF (2366/16), and ISF-NSFC Joint Research Program (2472/17, 2217/15).
Bibtex
@article{AppMod18,
title = {Appearance Modeling via Proxy-to-Image Alignment},
author = {Hui Huang and Ke Xie and Lin Ma and Dani Lischinski and Minglun Gong and Xin Tong and Daniel Cohen-Or},
journal = {ACM Transactions on Graphics},
volume = {37},
number = {1},
pages = {10:1--10:15},
year = {2018},
}