Learning the sparse representation for classification

Jianchao Yang, Jiangping Wang, Thomas Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


In this work, we propose a novel supervised matrix factorization method used directly as a multi-class classifier. The coefficient matrix of the factorization is enforced to be sparse by ℓ1-norm regularization. The basis matrix is composed of atom dictionaries from different classes, which are trained jointly in a supervised manner by penalizing inhomogeneous representations of the labeled data samples. The learned basis matrix models the data of interest as a union of discriminative linear subspaces via sparse projection. The proposed model is based on the observation that many high-dimensional natural signals lie in a much lower-dimensional subspace or a union of such subspaces. Experiments conducted on several datasets show the effectiveness of this representation model for classification, and also suggest that a tight reconstructive representation model can be very useful for discriminant analysis.
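The abstract describes classification by sparse coding over a dictionary whose atoms are grouped by class, assigning a sample to the class whose atoms best reconstruct it. Below is a minimal NumPy sketch of that general idea, not the authors' trained dictionaries or exact objective: it uses synthetic data from two random low-dimensional subspaces, a plain ISTA solver for the ℓ1-regularized coding step, and residual-based class assignment. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (assumed for illustration): each class lies in its own
# low-dimensional subspace of R^20.
d, k, n = 20, 3, 15                        # ambient dim, subspace dim, samples/class
bases = [rng.normal(size=(d, k)) for _ in range(2)]

def sample(c, m=1):
    """Draw m points from the subspace of class c."""
    return bases[c] @ rng.normal(size=(k, m))

# Dictionary D: columns are normalized training samples, grouped by class.
D = np.hstack([sample(c, n) for c in range(2)])
D /= np.linalg.norm(D, axis=0)
labels = np.repeat([0, 1], n)

def soft(z, t):
    """Soft-thresholding operator (prox of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code(x, lam=0.05, iters=500):
    """ISTA for min_a 0.5 * ||x - D a||^2 + lam * ||a||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a = soft(a - D.T @ (D @ a - x) / L, lam / L)
    return a

def classify(x):
    a = sparse_code(x)
    # Assign x to the class whose atoms reconstruct it with the least residual.
    res = [np.linalg.norm(x - D[:, labels == c] @ a[labels == c]) for c in range(2)]
    return int(np.argmin(res))

print(classify(sample(0).ravel()), classify(sample(1).ravel()))
```

Because the two class subspaces are (with high probability) nearly independent in R^20, the sparse code of a test point concentrates on the atoms of its own class, which is what makes the per-class residual a usable decision rule.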

Original language: English (US)
Title of host publication: Electronic Proceedings of the 2011 IEEE International Conference on Multimedia and Expo, ICME 2011
State: Published - Nov 7 2011
Event: 2011 12th IEEE International Conference on Multimedia and Expo, ICME 2011 - Barcelona, Spain
Duration: Jul 11 2011 - Jul 15 2011

Publication series

Name: Proceedings - IEEE International Conference on Multimedia and Expo
ISSN (Print): 1945-7871
ISSN (Electronic): 1945-788X




Keywords

  • Sparse representation
  • dictionary training
  • digit recognition
  • face recognition
  • matrix factorization
  • sparse coding

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Computer Science Applications


