Joint learning of visual attributes, object classes and visual saliency

Gang Wang, David Forsyth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present a method to learn visual attributes (e.g., "red", "metal", "spotted") and object classes (e.g., "car", "dress", "umbrella") together. We assume images are labeled with the category, but not the location, of an instance. We estimate models with an iterative procedure: the current model is used to produce a saliency score, which, together with a homogeneity cue, identifies likely locations for the object (resp. attribute); those locations are then used to produce better models with multiple instance learning. Crucially, the object and attribute models must agree on the potential locations of an object. This means that the more accurate of the two models can guide the improvement of the less accurate model. Our method is evaluated on two data sets of images of real scenes, one in which the attribute is color and the other in which it is material. We show that our joint learning produces improved detectors. We demonstrate generalization by detecting attribute-object pairs which do not appear in our training data. The iteration gives significant improvement in performance.
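The alternation the abstract describes can be illustrated with a heavily simplified toy sketch. Everything below is an assumption for illustration, not the authors' implementation: regions are 4-d vectors whose first two dimensions play the role of object features and last two of attribute features, both models are reduced to mean prototypes, and the saliency score is just negative distance to the prototype. The function names (`joint_round`, `saliency`, `fit`) are hypothetical. The sketch keeps the key idea: per image, the object and attribute models must agree on the most salient region, and both are refit on the agreed picks, so an accurate model can pull a weak one along.

```python
import numpy as np

# Assumed feature layout for this toy: each candidate region is a 4-d
# vector; the first two dims are "object" features, the last two are
# "attribute" features.
OBJ, ATTR = slice(0, 2), slice(2, 4)

def fit(picks, dims):
    """Refit a model as the mean (prototype) of the selected regions."""
    return picks[:, dims].mean(axis=0)

def saliency(proto, regions, dims):
    """Score regions by closeness to the prototype (higher = more salient)."""
    return -np.linalg.norm(regions[:, dims] - proto, axis=1)

def joint_round(bags, obj_proto, attr_proto):
    """One iteration: per image (a 'bag' of regions with an image-level
    label but no location), pick the region where the object and
    attribute models agree (highest combined saliency), then refit both
    models on the picks -- a crude stand-in for the paper's multiple
    instance learning step."""
    picks = []
    for regions in bags:
        combined = (saliency(obj_proto, regions, OBJ)
                    + saliency(attr_proto, regions, ATTR))
        picks.append(regions[np.argmax(combined)])
    picks = np.stack(picks)
    return fit(picks, OBJ), fit(picks, ATTR)

# Demo: the object model starts accurate, the attribute model does not;
# requiring agreement on locations lets the good model guide the weak one.
rng = np.random.default_rng(0)
bags = []
for _ in range(20):
    target = np.ones(4) + rng.normal(0, 0.05, 4)             # true object+attribute region
    clutter = -np.ones((4, 4)) + rng.normal(0, 0.2, (4, 4))  # background regions
    bags.append(np.vstack([target[None, :], clutter]))

obj_proto, attr_proto = np.array([1.0, 1.0]), np.array([0.0, 0.0])
for _ in range(3):
    obj_proto, attr_proto = joint_round(bags, obj_proto, attr_proto)
print(attr_proto)  # the attribute prototype ends up near the true value [1, 1]
```

In the paper the models are real detectors and the location cue also uses homogeneity; the sketch only shows why agreement between the two models is the useful constraint in the iteration.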

Original language: English (US)
Title of host publication: 2009 IEEE 12th International Conference on Computer Vision, ICCV 2009
Pages: 537-544
Number of pages: 8
DOIs
State: Published - 2009
Event: 12th International Conference on Computer Vision, ICCV 2009 - Kyoto, Japan
Duration: Sep 29, 2009 – Oct 2, 2009

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision

Other

Other: 12th International Conference on Computer Vision, ICCV 2009
Country/Territory: Japan
City: Kyoto
Period: 9/29/09 – 10/2/09

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
