An empirical Bayes approach to contextual region classification

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper presents a nonparametric approach to labeling local image regions, inspired by recent developments in information-theoretic denoising. The chief novelty of this approach is its ability to derive an unsupervised contextual prior over image classes from unlabeled test data. Labeled training data is needed only to learn a local appearance model for image patches (although additional supervisory information can optionally be incorporated when it is available). Instead of assuming a parametric prior such as a Markov random field for the class labels, the proposed approach uses the empirical Bayes technique of statistical inversion to recover a contextual model directly from the test data, either as a spatially varying or as a globally constant prior distribution over the classes in the image. Results on two challenging datasets convincingly demonstrate that useful contextual information can indeed be learned from unlabeled data.
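As a rough illustration of the globally constant variant described above, the sketch below estimates a class prior from unlabeled test data by running EM on the mixture weights of a fixed, pre-trained appearance model, which is one standard empirical Bayes reading of "maximize the marginal likelihood over the prior." The function name, the `likelihoods` array, and the EM formulation are illustrative assumptions, not the paper's actual statistical-inversion procedure.

```python
import numpy as np

def estimate_global_prior(likelihoods, n_iters=100, tol=1e-8):
    """Estimate a globally constant class prior pi from unlabeled patches.

    likelihoods: (N, K) array with likelihoods[i, k] = p(x_i | class k),
    produced by a pre-trained local appearance model. EM maximizes the
    marginal likelihood prod_i sum_k pi_k * p(x_i | k) over pi alone.
    """
    n, k = likelihoods.shape
    pi = np.full(k, 1.0 / k)  # start from a uniform prior
    for _ in range(n_iters):
        # E-step: posterior responsibility of each class for each patch
        joint = likelihoods * pi                       # (N, K)
        post = joint / joint.sum(axis=1, keepdims=True)
        # M-step: the maximizing prior is the average responsibility
        new_pi = post.mean(axis=0)
        if np.abs(new_pi - pi).max() < tol:
            pi = new_pi
            break
        pi = new_pi
    return pi
```

Given the estimated `pi`, each patch could then be labeled by the argmax of its posterior, `likelihoods[i] * pi` renormalized; the spatially varying variant in the paper would replace the single `pi` with a location-dependent prior, which this sketch does not attempt.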

Original language: English (US)
Title of host publication: 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009
Publisher: IEEE Computer Society
Pages: 2380-2387
Number of pages: 8
ISBN (Print): 9781424439935
DOIs
State: Published - 2009
Externally published: Yes
Event: 2009 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2009 - Miami, FL, United States
Duration: Jun 20 2009 to Jun 25 2009

Publication series

Name: 2009 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2009

Other

Other: 2009 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2009
Country/Territory: United States
City: Miami, FL
Period: 6/20/09 to 6/25/09

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Biomedical Engineering
