Relaxed Collaborative Representation for Pattern Classification

Meng Yang, Lei Zhang, David Zhang, Shenlong Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Regularized linear representation learning has led to interesting results in image classification, yet how an object should be represented remains a critical issue to investigate. Considering that different features in a sample should contribute differently to pattern representation and classification, in this paper we present a novel relaxed collaborative representation (RCR) model to effectively exploit the similarity and distinctiveness of features. In RCR, each feature vector is coded on its associated dictionary to allow flexibility of feature coding, while the variance of the coding vectors is minimized to address the similarity among features. In addition, the distinctiveness of different features is exploited by weighting each feature's distance to the other features in the coding domain. The proposed RCR is simple, yet our extensive experimental results on benchmark image databases (e.g., various face and flower databases) show that it is very competitive with state-of-the-art image classification methods.
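The abstract describes coding each feature vector on its own dictionary while penalizing the spread of the coding vectors around a weighted mean. A minimal illustrative sketch of this idea is given below; the exact objective, the function name `rcr_code`, and the parameters `lam` (ridge regularization) and `tau` (variance penalty) are assumptions for illustration, not the authors' reference implementation. It assumes an objective of the form min over the codes a_k of the sum of ||y_k - D_k a_k||^2 + lam ||a_k||^2 + tau w_k ||a_k - a_bar||^2, solved by alternating closed-form ridge updates.

```python
import numpy as np

def rcr_code(features, dicts, weights, lam=0.01, tau=0.1, n_iters=20):
    """Sketch of relaxed collaborative coding (assumed formulation).

    features : list of K feature vectors y_k
    dicts    : list of K dictionaries D_k (same number of atoms each)
    weights  : list of K feature weights w_k
    Returns the list of coding vectors a_k.
    """
    K = len(features)
    n_atoms = dicts[0].shape[1]
    codes = [np.zeros(n_atoms) for _ in range(K)]
    w = np.asarray(weights, dtype=float)
    for _ in range(n_iters):
        # Weighted mean of the current coding vectors.
        a_bar = sum(wk * a for wk, a in zip(w, codes)) / w.sum()
        for k in range(K):
            D, y = dicts[k], features[k]
            # Closed-form ridge update: the variance term pulls a_k
            # toward a_bar with strength tau * w_k.
            A = D.T @ D + (lam + tau * w[k]) * np.eye(n_atoms)
            b = D.T @ y + tau * w[k] * a_bar
            codes[k] = np.linalg.solve(A, b)
    return codes
```

In this sketch, a larger weight w_k ties feature k more tightly to the shared mean code, which is one simple way to realize the similarity/distinctiveness trade-off the abstract describes.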

Original language: English (US)
Title of host publication: 2012 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2012
Number of pages: 8
State: Published - 2012
Externally published: Yes
Event: 2012 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2012 - Providence, RI, United States
Duration: Jun 16 2012 - Jun 21 2012

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919


Other: 2012 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2012
Country/Territory: United States
City: Providence, RI

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition


