Support surface prediction in indoor scenes

Ruiqi Guo, Derek Hoiem

Research output: Chapter in Book/Report/Conference proceeding > Conference contribution


In this paper, we present an approach to predict the extent and height of supporting surfaces such as tables, chairs, and cabinet tops from a single RGBD image. We define support surfaces to be horizontal, planar surfaces that can physically support objects and humans. Given an RGBD image, our goal is to localize the height and full extent of such surfaces in 3D space. To achieve this, we created a labeling tool and annotated the 1449 images of the NYU dataset with rich, complete 3D scene models. We extract ground truth from the annotated dataset and develop a pipeline for predicting the floor space, walls, and the height and full extent of support surfaces. Finally, we match the predicted extents against annotated training scenes and transfer the support surface configurations from those scenes. We evaluate the proposed approach on our dataset and demonstrate its effectiveness for understanding scenes in 3D space.
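The paper's definition of a support surface (horizontal, planar, able to hold objects) can be illustrated with a minimal sketch. The snippet below is a hypothetical example, not the authors' pipeline: given a point cloud with per-point normals, it keeps points whose normals align with the up direction and reads candidate surface heights off a height histogram. All names, thresholds, and bin sizes here are illustrative assumptions.

```python
import numpy as np

def support_surface_heights(points, normals, up=(0.0, 0.0, 1.0),
                            normal_thresh=0.95, bin_size=0.05, min_points=50):
    """Return candidate support-surface heights from an RGBD point cloud.

    Keeps points whose surface normal is nearly parallel to the up
    direction (i.e. lying on horizontal surfaces), then histograms their
    heights; each well-populated bin yields one candidate surface height.
    Thresholds are illustrative, not taken from the paper.
    """
    up = np.asarray(up, dtype=float)
    up /= np.linalg.norm(up)
    # Horizontal surfaces have normals (anti-)parallel to "up".
    horizontal = np.abs(normals @ up) > normal_thresh
    heights = points[horizontal] @ up
    if heights.size == 0:
        return []
    edges = np.arange(heights.min(), heights.max() + bin_size, bin_size)
    if edges.size < 2:  # all kept points at (nearly) one height
        return [float(heights.mean())]
    counts, edges = np.histogram(heights, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return [float(c) for c, n in zip(centers, counts) if n >= min_points]

# Synthetic demo: a floor at z = 0 m and a table top at z = 0.7 m.
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(0, 4, 500), rng.uniform(0, 4, 500),
                         np.zeros(500)])
table = np.column_stack([rng.uniform(1, 2, 200), rng.uniform(1, 2, 200),
                         np.full(200, 0.7)])
pts = np.vstack([floor, table])
nrm = np.tile([0.0, 0.0, 1.0], (700, 1))
heights = support_surface_heights(pts, nrm)
print(heights)  # two candidate heights, near 0 m and near 0.7 m
```

This only finds the heights at which horizontal structure exists; localizing the full 2D extent of each surface, as the paper does, requires further reasoning per height layer.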

Original language: English (US)
Title of host publication: Proceedings - 2013 IEEE International Conference on Computer Vision, ICCV 2013
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 8
ISBN (Print): 9781479928392
State: Published - 2013
Externally published: Yes
Event: 2013 14th IEEE International Conference on Computer Vision, ICCV 2013 - Sydney, NSW, Australia
Duration: Dec 1, 2013 to Dec 8, 2013

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision


Other: 2013 14th IEEE International Conference on Computer Vision, ICCV 2013
City: Sydney, NSW


Keywords

  • RGBD
  • image parsing
  • scene understanding
  • support surfaces

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition


