Modeling and recognition of landmark image collections using iconic scene graphs

Rahul Raguram, Changchang Wu, Jan-Michael Frahm, Svetlana Lazebnik

Research output: Contribution to journal › Article › peer-review

Abstract

This article presents an approach for modeling landmarks based on large-scale, heavily contaminated image collections gathered from the Internet. Our system efficiently combines 2D appearance and 3D geometric constraints to extract scene summaries and construct 3D models. In the first stage of processing, images are clustered based on low-dimensional global appearance descriptors, and the clusters are refined using 3D geometric constraints. Each valid cluster is represented by a single iconic view, and the geometric relationships between iconic views are captured by an iconic scene graph. Using structure from motion techniques, the system then registers the iconic images to efficiently produce 3D models of the different aspects of the landmark. To improve coverage of the scene, these 3D models are subsequently extended using additional, non-iconic views. We also demonstrate the use of iconic images for recognition and browsing. Our experimental results demonstrate the ability to process datasets containing up to 46,000 images in less than 20 hours, using a single commodity PC equipped with a graphics card. This is a significant advance towards Internet-scale operation.
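The abstract describes a three-stage pipeline: appearance-based clustering, geometric verification that selects an iconic view per cluster, and a graph linking iconic views that share consistent geometry. The sketch below illustrates that flow in Python. It is not the authors' code; the descriptor and two-view verification steps are stubbed out (`gist_descriptor` and `geometric_inlier_count` are placeholder names standing in for real global-descriptor extraction and feature matching with RANSAC), and the parameter values are illustrative.

```python
# Minimal sketch of the clustering -> verification -> iconic-scene-graph pipeline
# described in the abstract. Descriptor extraction and geometric verification are
# placeholders, not the method from the paper.

import numpy as np
from sklearn.cluster import KMeans
import networkx as nx

def gist_descriptor(image):
    """Placeholder: return a low-dimensional global appearance descriptor."""
    rng = np.random.default_rng(abs(hash(image)) % (2**32))
    return rng.random(64)

def geometric_inlier_count(image_a, image_b):
    """Placeholder: number of feature matches consistent with two-view geometry
    (in practice, local-feature matching followed by RANSAC)."""
    return np.random.randint(0, 100)

def build_iconic_scene_graph(images, n_clusters=10, min_inliers=18):
    # Stage 1: cluster images by global appearance descriptors.
    descriptors = np.stack([gist_descriptor(im) for im in images])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(descriptors)

    # Stage 2: verify clusters geometrically; the member with the most inlier
    # support from the other members becomes the cluster's iconic view.
    iconic_views = []
    for c in range(n_clusters):
        members = [im for im, lab in zip(images, labels) if lab == c]
        if len(members) < 2:
            continue
        support = [sum(geometric_inlier_count(m, o) for o in members if o != m)
                   for m in members]
        if max(support) >= min_inliers:
            iconic_views.append(members[int(np.argmax(support))])

    # Stage 3: connect iconic views that share enough geometrically consistent
    # matches; connected components correspond to different aspects of the scene.
    graph = nx.Graph()
    graph.add_nodes_from(iconic_views)
    for i, a in enumerate(iconic_views):
        for b in iconic_views[i + 1:]:
            if geometric_inlier_count(a, b) >= min_inliers:
                graph.add_edge(a, b)
    return graph

if __name__ == "__main__":
    photos = [f"photo_{i:04d}.jpg" for i in range(200)]
    g = build_iconic_scene_graph(photos)
    print(f"{g.number_of_nodes()} iconic views, {g.number_of_edges()} edges")
```

In the full system, the resulting graph components would then be passed to structure from motion to reconstruct each aspect of the landmark, with non-iconic views registered afterwards to extend coverage.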

Original language: English (US)
Pages (from-to): 213-239
Number of pages: 27
Journal: International Journal of Computer Vision
Volume: 95
Issue number: 3
DOIs
State: Published - Dec 2011
Externally published: Yes

Keywords

  • Image clustering
  • Landmark recognition
  • Landmark reconstruction
  • Location recognition
  • Photo collection reconstruction
  • Structure from motion

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
