A tale of two classifiers: SNoW vs. SVM in visual recognition

Ming Hsuan Yang, Dan Roth, Narendra Ahuja

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Numerous statistical learning methods have been developed for visual recognition tasks. Few attempts, however, have been made to address theoretical issues, and in particular, to study the suitability of different learning algorithms for visual recognition. Large margin classifiers, such as SNoW and SVM, have recently demonstrated their success in object detection and recognition. In this paper, we present a theoretical account of these two learning approaches and their suitability to visual recognition. Using tools from computational learning theory, we show that the main difference between the generalization bounds of SVM and SNoW depends on the properties of the data. We argue that learning problems in the visual domain have sparseness characteristics, and support this by analyzing data taken from face detection experiments. Experimental results demonstrate good generalization and robustness properties of the SNoW-based method, and conform to the theoretical analysis.
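The multiplicative-update rule at the heart of SNoW is Winnow, whose mistake bound grows with the number of *relevant* features rather than the total feature count — the sparseness property the abstract appeals to. Below is a minimal, self-contained sketch (not the authors' SNoW system; the toy data and threshold are illustrative assumptions) showing Winnow learning a 2-literal disjunction over 16 features from sparse examples:

```python
# Winnow sketch: multiplicative updates learn a sparse target (x0 OR x1)
# over n features. Examples are (set_of_active_feature_indices, label) pairs,
# a natural sparse encoding. Toy data below is illustrative, not from the paper.

def winnow(examples, n, theta=None, max_passes=100):
    """Run Winnow passes over `examples` until a clean pass or max_passes.

    Returns (weights, total_mistakes, converged)."""
    theta = n / 2.0 if theta is None else theta
    w = [1.0] * n               # all weights start at 1
    mistakes = 0
    for _ in range(max_passes):
        clean = True
        for active, y in examples:
            yhat = 1 if sum(w[i] for i in active) >= theta else 0
            if yhat != y:
                mistakes += 1
                clean = False
                # promote on false negative, demote on false positive;
                # only the (few) active features are touched
                factor = 2.0 if y == 1 else 0.5
                for i in active:
                    w[i] *= factor
        if clean:
            return w, mistakes, True
    return w, mistakes, False

# Target concept: label is 1 iff feature 0 or feature 1 is active.
examples = [
    ({0, 3}, 1), ({1, 5, 7}, 1), ({2, 3}, 0),
    ({4, 5, 6}, 0), ({0}, 1), ({8, 9}, 0),
]
w, mistakes, converged = winnow(examples, n=16)
```

Because updates are multiplicative and touch only active features, the number of mistakes Winnow makes depends (logarithmically in n) on the few relevant features — which is why, on sparse visual data, its bound can compare favorably with margin-based bounds.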

Original language: English (US)
Title of host publication: Computer Vision - ECCV 2002 - 7th European Conference on Computer Vision, Proceedings
Editors: Anders Heyden, Gunnar Sparr, Mads Nielsen, Peter Johansen
Number of pages: 15
ISBN (Electronic): 9783540437482
State: Published - 2002
Event: 7th European Conference on Computer Vision, ECCV 2002 - Copenhagen, Denmark
Duration: May 28, 2002 - May 31, 2002

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)
