TY - GEN
T1 - Explanation-based object recognition
AU - Levine, Geoffrey
AU - DeJong, Gerald
PY - 2008
Y1 - 2008
AB - Many of today's visual scene and object categorization systems learn to classify using a statistical profile over a large number of small-scale local features sampled from the image. While some application systems have been constructed, this technology has enjoyed far more success in the research setting. The approach is best suited to tasks where within-class variability is small compared to between-class variability. This condition holds for large, diverse artificial collections such as CalTech 101, where most categories have little to do with each other, but it often does not hold among naturalistic, application-driven categories. Here, category distinctions are more likely to be conceptual or functional, and within-class differences can rival or exceed between-class differences. In this paper, we show how the local-feature approach can be extended using explanation-based learning (EBL). The EBL approach makes use of readily available prior domain knowledge assembled into plausible explanations for why a training example's observable features might merit its assigned training label. Explanations expose additional semantic features and suggest how those hidden features may be estimated from observable features. We exhibit our approach on two CalTech 101 dataset tasks that we argue are emblematic of applied domains: Ketch vs. Schooner and Airplane vs. Background. In both cases, classification accuracy is significantly improved.
UR - http://www.scopus.com/inward/record.url?scp=50949099567&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=50949099567&partnerID=8YFLogxK
U2 - 10.1109/WACV.2008.4544019
DO - 10.1109/WACV.2008.4544019
M3 - Conference contribution
AN - SCOPUS:50949099567
SN - 1424419131
SN - 9781424419135
T3 - 2008 IEEE Workshop on Applications of Computer Vision, WACV
BT - 2008 IEEE Workshop on Applications of Computer Vision, WACV
T2 - 2008 IEEE Workshop on Applications of Computer Vision, WACV
Y2 - 7 January 2008 through 9 January 2008
ER -