Object image retrieval by exploiting online knowledge resources

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We describe a method to retrieve images found on web pages with specified object class labels, using an analysis of text around the image and of image appearance. Our method determines whether an object is both described in text and appears in an image using a discriminative image model and a generative text model. Our models are learnt by exploiting established online knowledge resources (Wikipedia pages for text; Flickr and Caltech data sets for images). These resources provide rich text and object appearance information. We describe results on two data sets. The first is Berg's collection of ten animal categories; on this data set, we outperform previous approaches [7, 33]. We have also collected five more categories. Experimental results show the effectiveness of our approach on this new data set.
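The abstract describes the pairing of a discriminative image model with a generative text model only at a high level. As a rough illustration (not the authors' implementation), the sketch below shows one plausible way to rank candidate web images by combining a classifier margin on image appearance with a naive-Bayes-style likelihood of the words surrounding the image; the function names, the sigmoid squashing, and the linear weighting are assumptions made for this example.

```python
import math

# Hypothetical sketch (not the paper's code): rank web images for a query
# class (e.g. "frog") by combining a generative score of the text around the
# image with a discriminative image-appearance score.

def text_log_likelihood(words, class_word_counts, vocab_size, alpha=1.0):
    """Naive-Bayes-style score of nearby words under a class-specific word
    distribution (e.g. one that could be estimated from Wikipedia pages).
    `class_word_counts` maps word -> count observed for the class."""
    total = sum(class_word_counts.values())
    score = 0.0
    for w in words:
        # Laplace-smoothed unigram probability of each surrounding word.
        p = (class_word_counts.get(w, 0) + alpha) / (total + alpha * vocab_size)
        score += math.log(p)
    return score / max(len(words), 1)  # length-normalise so pages compare fairly

def combined_score(image_margin, words, class_word_counts, vocab_size, w_img=0.5):
    """Fuse a discriminative image score (e.g. a classifier margin from a model
    trained on Flickr/Caltech images) with the text score. The sigmoid and the
    linear weighting are illustrative choices, not the authors' rule."""
    img = 1.0 / (1.0 + math.exp(-image_margin))  # squash margin into (0, 1)
    txt = text_log_likelihood(words, class_word_counts, vocab_size)
    return w_img * img + (1.0 - w_img) * txt

# Toy usage: two candidate web images for the class "frog".
frog_word_counts = {"frog": 40, "amphibian": 12, "pond": 9, "green": 7}
candidates = [
    {"id": "img_a", "margin": 1.3, "words": ["green", "frog", "pond"]},
    {"id": "img_b", "margin": -0.4, "words": ["band", "concert", "tour"]},
]
ranked = sorted(
    candidates,
    key=lambda c: combined_score(c["margin"], c["words"], frog_word_counts, vocab_size=10_000),
    reverse=True,
)
print([c["id"] for c in ranked])  # img_a (text and appearance agree) ranks first
```

In the paper itself the two models are learned from Wikipedia text and Flickr/Caltech images respectively; the exact scoring and fusion rules differ from this toy example.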

Original language: English (US)
Title of host publication: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
DOIs: https://doi.org/10.1109/CVPR.2008.4587818
State: Published - Sep 23 2008
Event: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR - Anchorage, AK, United States
Duration: Jun 23 2008 → Jun 28 2008

Publication series

Name: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR

Other

Other: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
Country: United States
City: Anchorage, AK
Period: 6/23/08 → 6/28/08

Fingerprint

  • Image retrieval
  • Labels
  • Websites
  • Animals

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Control and Systems Engineering

Cite this

Wang, G., & Forsyth, D. A. (2008). Object image retrieval by exploiting online knowledge resources. In 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR [4587818] (26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR). https://doi.org/10.1109/CVPR.2008.4587818

@inproceedings{efabbdffc7cd4ca8a90b86e55af23d81,
title = "Object image retrieval by exploiting online knowledge resources",
abstract = "We describe a method to retrieve images found on web pages with specified object class labels, using an analysis of text around the image and of image appearance. Our method determines whether an object is both described in text and appears in a image using a discriminative image model and a generative text model. Our models are learnt by exploiting established online knowledge resources (Wikipedia pages for text; Flickr and Caltech data sets for image). These resources provide rich text and object appearance information. We describe results on two data sets. The first is Berg's collection of ten animal categories; on this data set, we outperform previous approaches [7, 33]. We have also collected five more categories. Experimental results show the effectiveness of our approach on this new data set.",
author = "Gang Wang and Forsyth, {David Alexander}",
year = "2008",
month = "9",
day = "23",
doi = "10.1109/CVPR.2008.4587818",
language = "English (US)",
isbn = "9781424422432",
series = "26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR",
booktitle = "26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR",

}
