TY - JOUR
T1 - Complete 3D Scene Parsing from an RGBD Image
AU - Zou, Chuhang
AU - Guo, Ruiqi
AU - Li, Zhizhong
AU - Hoiem, Derek
N1 - Funding Information:
Acknowledgements This research is supported in part by ONR MURI Grant N000141010934 and ONR MURI Grant N000141612007. We thank David Forsyth for insightful comments and discussion, and Saurabh Singh, Kevin Shih, and Tanmay Gupta for their comments on an earlier version of the manuscript.
Publisher Copyright:
© 2018, Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2019/2/15
Y1 - 2019/2/15
N2 - One major goal of vision is to infer physical models of objects, surfaces, and their layout from sensors. In this paper, we aim to interpret indoor scenes from one RGBD image. Our representation encodes the layout of orthogonal walls and the extent of objects, modeled with CAD-like 3D shapes. We parse both the visible and occluded portions of the scene and all observable objects, producing a complete 3D parse. Such a scene interpretation is useful for robotics and visual reasoning, but difficult to produce due to the well-known challenge of segmentation, the high degree of occlusion, and the diversity of objects in indoor scenes. We take a data-driven approach, generating sets of potential object regions, matching to regions in training images, and transferring and aligning associated 3D models while encouraging fit to observations and spatial consistency. We use support inference to aid interpretation and propose a retrieval scheme that uses convolutional neural networks to classify regions and retrieve objects with similar shapes. We demonstrate the performance of our method on our newly annotated NYUd v2 dataset (Silberman et al., in: Computer Vision – ECCV 2012, pp 746–760, 2012) with detailed 3D shapes.
AB - One major goal of vision is to infer physical models of objects, surfaces, and their layout from sensors. In this paper, we aim to interpret indoor scenes from one RGBD image. Our representation encodes the layout of orthogonal walls and the extent of objects, modeled with CAD-like 3D shapes. We parse both the visible and occluded portions of the scene and all observable objects, producing a complete 3D parse. Such a scene interpretation is useful for robotics and visual reasoning, but difficult to produce due to the well-known challenge of segmentation, the high degree of occlusion, and the diversity of objects in indoor scenes. We take a data-driven approach, generating sets of potential object regions, matching to regions in training images, and transferring and aligning associated 3D models while encouraging fit to observations and spatial consistency. We use support inference to aid interpretation and propose a retrieval scheme that uses convolutional neural networks to classify regions and retrieve objects with similar shapes. We demonstrate the performance of our method on our newly annotated NYUd v2 dataset (Silberman et al., in: Computer Vision – ECCV 2012, pp 746–760, 2012) with detailed 3D shapes.
KW - 3D parsing
KW - Single image reconstruction
KW - Visual scene understanding
UR - http://www.scopus.com/inward/record.url?scp=85057089423&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85057089423&partnerID=8YFLogxK
U2 - 10.1007/s11263-018-1133-z
DO - 10.1007/s11263-018-1133-z
M3 - Article
AN - SCOPUS:85057089423
SN - 0920-5691
VL - 127
SP - 143
EP - 162
JO - International Journal of Computer Vision
JF - International Journal of Computer Vision
IS - 2
ER -