A multi-view embedding space for modeling internet images, tags, and their semantics

Yunchao Gong, Qifa Ke, Michael Isard, Svetlana Lazebnik

Research output: Contribution to journal › Article › peer-review

Abstract

This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.
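To make the pipeline in the abstract concrete, below is a minimal sketch (not the authors' code) of the two-view special case it builds on: visual and tag features are passed through explicit nonlinear kernel maps so that plain linear CCA approximates kernel CCA, and retrieval is done by normalized correlation (cosine similarity) in the shared latent space. The random stand-in features, the Nystroem RBF approximation, the dimensions, and the cosine scoring are all illustrative assumptions; the paper's third semantic view and its specially designed similarity function are not reproduced here.

```python
# Hedged sketch of a CCA-based cross-modal embedding with explicit kernel maps.
# All data and parameter choices below are placeholders for illustration only.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.kernel_approximation import Nystroem
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
X_visual = rng.normal(size=(500, 128))   # stand-in visual descriptors
X_tags = rng.normal(size=(500, 64))      # stand-in tag (text) features

# Explicit nonlinear feature maps so that linear CCA on the mapped features
# approximates kernel CCA without forming full kernel matrices.
vis_map = Nystroem(kernel="rbf", n_components=256, random_state=0)
tag_map = Nystroem(kernel="rbf", n_components=256, random_state=0)
Phi_v = vis_map.fit_transform(X_visual)
Phi_t = tag_map.fit_transform(X_tags)

# Fit CCA and project both views into a shared latent space.
cca = CCA(n_components=32, max_iter=1000)
Z_v, Z_t = cca.fit_transform(Phi_v, Phi_t)

# Tag-to-image retrieval: rank images by cosine similarity (normalized
# correlation) between the query's embedding and the image embeddings.
Z_v = normalize(Z_v)
query = normalize(Z_t[:1])               # pretend the first tag vector is a query
scores = (Z_v @ query.T).ravel()
top10 = np.argsort(-scores)[:10]
print("top-10 image indices for the query:", top10)
```

Cosine similarity is used here only as a simple proxy for the paper's retrieval score; the abstract states that the authors' similarity function substantially outperforms plain Euclidean distance in the embedded space.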

Original language: English (US)
Pages (from-to): 210-233
Number of pages: 24
Journal: International Journal of Computer Vision
Volume: 106
Issue number: 2
DOIs
State: Published - Jan 2014

Keywords

  • Canonical correlation
  • Image search
  • Internet images
  • Tags

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
