Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories

Svetlana Lazebnik, Cordelia Schmid, Jean Ponce

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting "spatial pyramid" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba's "gist" and Lowe's SIFT descriptors.
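
As a concrete illustration of the representation the abstract describes, the following is a minimal Python sketch that builds a spatial-pyramid histogram from local features that have already been quantized against a visual vocabulary (e.g., SIFT descriptors assigned to visual words). The function name, argument names, default number of levels, and the L1 normalization are illustrative assumptions rather than the authors' reference implementation; the per-level weights follow the pyramid match weighting described in the paper.

import numpy as np

def spatial_pyramid_histogram(points, codes, img_w, img_h, vocab_size, levels=2):
    # points: (N, 2) array of (x, y) keypoint locations in pixel coordinates.
    # codes:  (N,)  integer array of visual-word indices in [0, vocab_size).
    # Returns the concatenation of per-cell histograms over all pyramid levels,
    # weighted so that matches at finer levels count more (pyramid match weighting).
    parts = []
    for level in range(levels + 1):
        cells = 2 ** level  # the image is split into a cells-by-cells grid at this level
        weight = 1.0 / 2 ** levels if level == 0 else 1.0 / 2 ** (levels - level + 1)
        # Assign every keypoint to its grid cell at this level.
        col = np.minimum((points[:, 0] / img_w * cells).astype(int), cells - 1)
        row = np.minimum((points[:, 1] / img_h * cells).astype(int), cells - 1)
        cell_of_point = row * cells + col
        for c in range(cells * cells):
            hist = np.bincount(codes[cell_of_point == c], minlength=vocab_size)
            parts.append(weight * hist)
    feature = np.concatenate(parts).astype(float)
    return feature / max(feature.sum(), 1e-12)  # L1-normalize the final descriptor

Two images can then be compared with a histogram-intersection kernel on these weighted vectors, which matches the weighted intersection kernel described in the paper.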

Original language: English (US)
Title of host publication: Proceedings - 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006
Pages: 2169-2178
Number of pages: 10
DOIs
State: Published - 2006
Event: 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006 - New York, NY, United States
Duration: Jun 17 2006 - Jun 22 2006

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Volume: 2
ISSN (Print): 1063-6919

Other

Other: 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006
Country/Territory: United States
City: New York, NY
Period: 6/17/06 - 6/22/06

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
