TY - GEN
T1 - Efficient highly over-complete sparse coding using a mixture model
AU - Yang, Jianchao
AU - Yu, Kai
AU - Huang, Thomas
N1 - Copyright 2019 Elsevier B.V., All rights reserved.
PY - 2010
Y1 - 2010
AB - Sparse coding of sensory data has recently attracted notable attention in research on learning useful features from unlabeled data. Empirical studies show that mapping data into a significantly higher-dimensional space with sparse coding can lead to superior classification performance. However, it is computationally challenging to learn a highly over-complete set of dictionary bases and to encode test data with the learned bases. In this paper, we describe a mixture sparse coding model that produces high-dimensional sparse representations very efficiently. Beyond this computational advantage, the model effectively encourages similar data to share similar sparse representations. Moreover, the proposed model can be regarded as an approximation to the recently proposed local coordinate coding (LCC), which states that sparse coding can approximately learn the nonlinear manifold of the sensory data in a locally linear manner. The features learned by the mixture sparse coding model therefore work well with linear classifiers. We apply the proposed model to the PASCAL VOC 2007 and 2009 datasets for the classification task, achieving state-of-the-art performance on both.
KW - PASCAL VOC challenge
KW - Sparse coding
KW - highly over-complete dictionary training
KW - image classification
KW - mixture model
KW - mixture sparse coding
UR - http://www.scopus.com/inward/record.url?scp=78149308464&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=78149308464&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-15555-0_9
DO - 10.1007/978-3-642-15555-0_9
M3 - Conference contribution
AN - SCOPUS:78149308464
SN - 3642155545
SN - 9783642155543
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 113
EP - 126
BT - Computer Vision, ECCV 2010 - 11th European Conference on Computer Vision, Proceedings
PB - Springer
T2 - 11th European Conference on Computer Vision, ECCV 2010
Y2 - 5 September 2010 through 11 September 2010
ER -