Attribute discovery via predictable discriminative binary codes

Mohammad Rastegari, Ali Farhadi, David Forsyth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present images with binary codes in a way that balances discrimination and learnability of the codes. In our method, each image claims its own code in a way that maintains discrimination while being predictable from visual data. Category memberships are usually good proxies for visual similarity but should not be enforced as a hard constraint. Our method learns codes that maximize separability of categories unless there is strong visual evidence against it. Simple linear SVMs can achieve state-of-the-art results with our short codes. In fact, our method produces state-of-the-art results on Caltech256 with only 128-dimensional bit vectors and outperforms the state of the art when longer codes are used. We also evaluate our method on ImageNet and show that it outperforms state-of-the-art binary code methods on this large-scale dataset. Lastly, our codes can discover a discriminative set of attributes.
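The pipeline the abstract describes — map each image's visual features to a short binary code, then classify with a simple linear SVM on the codes — can be illustrated with a toy sketch. This is not the paper's learning procedure: the random hyperplanes below are a stand-in for the learned, discrimination-preserving bit predictors, and the synthetic two-category data is purely illustrative.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic "visual features": two categories with shifted means
# (a placeholder for real image descriptors).
X = np.vstack([rng.normal(0.0, 1.0, (50, 32)),
               rng.normal(1.5, 1.0, (50, 32))])
y = np.array([0] * 50 + [1] * 50)

# Step 1: encode each image as a short binary code. Here we threshold
# random projections; the paper instead *learns* bit predictors that
# balance discrimination and predictability from visual data.
n_bits = 16
W = rng.normal(size=(32, n_bits))
codes = (X @ W > 0).astype(np.uint8)   # shape (100, 16), one bit per column

# Step 2: a simple linear SVM classifies images from their binary codes.
clf = LinearSVC(C=1.0).fit(codes, y)
acc = clf.score(codes, y)
```

Even with random (unlearned) bits, the linear SVM separates the toy categories well, which is the point the abstract makes: short binary codes can carry enough category information for simple linear classifiers.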

Original language: English (US)
Title of host publication: Computer Vision, ECCV 2012 - 12th European Conference on Computer Vision, Proceedings
Pages: 876-889
Number of pages: 14
Edition: PART 6
DOIs: yes
State: Published - Oct 30 2012
Event: 12th European Conference on Computer Vision, ECCV 2012 - Florence, Italy
Duration: Oct 7 2012 to Oct 13 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 6
Volume: 7577 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 12th European Conference on Computer Vision, ECCV 2012
Country: Italy
City: Florence
Period: 10/7/12 to 10/13/12

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science(all)

