Spectral regression for efficient regularized subspace learning

Deng Cai, Xiaofei He, Jiawei Han

Research output: Contribution to conference › Paper

Abstract

Subspace learning based face recognition methods have attracted considerable interest in recent years; these include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Locality Preserving Projection (LPP), Neighborhood Preserving Embedding (NPE), and Marginal Fisher Analysis (MFA). However, a disadvantage of all these approaches is that their computations involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we propose a novel dimensionality reduction framework, called Spectral Regression (SR), for efficient regularized subspace learning. SR casts the problem of learning the projective functions into a regression framework, which avoids eigen-decomposition of dense matrices. Moreover, with the regression-based framework, different kinds of regularizers can be naturally incorporated into our algorithm, which makes it more flexible. Computational analysis shows that SR has only linear-time complexity, a huge speed-up compared with the cubic-time complexity of the ordinary approaches. Experimental results on face recognition demonstrate the effectiveness and efficiency of our method.
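The core idea in the abstract, learning projective functions by regularized regression against a spectral embedding rather than by dense eigen-decomposition, can be sketched as follows. This is a minimal illustration with hypothetical variable names, not the authors' implementation; the supervised example uses class-indicator responses as the embedding, one simple instance of the SR framework.

```python
import numpy as np

def spectral_regression(X, Y, alpha=0.1):
    """Learn projection vectors W by regularized least squares.

    Minimizes ||X W - Y||^2 + alpha * ||W||^2, where each column of Y
    is a low-dimensional embedding of the samples (e.g., eigenvectors
    of a graph built over the data). Solving these regularized normal
    equations replaces the dense eigen-decomposition used by classical
    subspace methods such as LDA or LPP.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Hypothetical supervised example: one-hot class responses serve as the
# embedding, a discriminant-style use of the regression framework.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 10))          # 60 samples, 10 features
labels = np.repeat([0, 1, 2], 20)          # three balanced classes
Y = np.eye(3)[labels]                      # class-indicator responses
W = spectral_regression(X, Y, alpha=0.1)   # 10 x 3 projection matrix
Z = X @ W                                  # projected (embedded) samples
```

In the unsupervised setting the responses Y would instead come from a few eigenvectors of a sparse affinity graph, which can be computed cheaply; combined with (possibly iterative) regularized least squares, this is what allows the method to avoid cubic-time dense eigenproblems.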

Original language: English (US)
DOI: 10.1109/ICCV.2007.4408855
State: Published - Dec 1 2007
Event: 2007 IEEE 11th International Conference on Computer Vision, ICCV - Rio de Janeiro, Brazil
Duration: Oct 14 2007 - Oct 21 2007

Other

Other: 2007 IEEE 11th International Conference on Computer Vision, ICCV
Country: Brazil
City: Rio de Janeiro
Period: 10/14/07 - 10/21/07

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition

Cite this

Cai, D., He, X., & Han, J. (2007). Spectral regression for efficient regularized subspace learning. Paper presented at 2007 IEEE 11th International Conference on Computer Vision, ICCV, Rio de Janeiro, Brazil. https://doi.org/10.1109/ICCV.2007.4408855

@conference{05613de166eb4cf7ac2a2a97462ed09b,
title = "Spectral regression for efficient regularized subspace learning",
abstract = "Subspace learning based face recognition methods have attracted considerable interest in recent years; these include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Locality Preserving Projection (LPP), Neighborhood Preserving Embedding (NPE), and Marginal Fisher Analysis (MFA). However, a disadvantage of all these approaches is that their computations involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we propose a novel dimensionality reduction framework, called Spectral Regression (SR), for efficient regularized subspace learning. SR casts the problem of learning the projective functions into a regression framework, which avoids eigen-decomposition of dense matrices. Moreover, with the regression-based framework, different kinds of regularizers can be naturally incorporated into our algorithm, which makes it more flexible. Computational analysis shows that SR has only linear-time complexity, a huge speed-up compared with the cubic-time complexity of the ordinary approaches. Experimental results on face recognition demonstrate the effectiveness and efficiency of our method.",
author = "Deng Cai and Xiaofei He and Jiawei Han",
year = "2007",
month = "12",
day = "1",
doi = "10.1109/ICCV.2007.4408855",
language = "English (US)",
note = "2007 IEEE 11th International Conference on Computer Vision, ICCV ; Conference date: 14-10-2007 Through 21-10-2007",

}

TY - CONF

T1 - Spectral regression for efficient regularized subspace learning

AU - Cai, Deng

AU - He, Xiaofei

AU - Han, Jiawei

PY - 2007/12/1

Y1 - 2007/12/1

N2 - Subspace learning based face recognition methods have attracted considerable interest in recent years; these include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Locality Preserving Projection (LPP), Neighborhood Preserving Embedding (NPE), and Marginal Fisher Analysis (MFA). However, a disadvantage of all these approaches is that their computations involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we propose a novel dimensionality reduction framework, called Spectral Regression (SR), for efficient regularized subspace learning. SR casts the problem of learning the projective functions into a regression framework, which avoids eigen-decomposition of dense matrices. Moreover, with the regression-based framework, different kinds of regularizers can be naturally incorporated into our algorithm, which makes it more flexible. Computational analysis shows that SR has only linear-time complexity, a huge speed-up compared with the cubic-time complexity of the ordinary approaches. Experimental results on face recognition demonstrate the effectiveness and efficiency of our method.

AB - Subspace learning based face recognition methods have attracted considerable interest in recent years; these include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Locality Preserving Projection (LPP), Neighborhood Preserving Embedding (NPE), and Marginal Fisher Analysis (MFA). However, a disadvantage of all these approaches is that their computations involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we propose a novel dimensionality reduction framework, called Spectral Regression (SR), for efficient regularized subspace learning. SR casts the problem of learning the projective functions into a regression framework, which avoids eigen-decomposition of dense matrices. Moreover, with the regression-based framework, different kinds of regularizers can be naturally incorporated into our algorithm, which makes it more flexible. Computational analysis shows that SR has only linear-time complexity, a huge speed-up compared with the cubic-time complexity of the ordinary approaches. Experimental results on face recognition demonstrate the effectiveness and efficiency of our method.

UR - http://www.scopus.com/inward/record.url?scp=50649123949&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=50649123949&partnerID=8YFLogxK

U2 - 10.1109/ICCV.2007.4408855

DO - 10.1109/ICCV.2007.4408855

M3 - Paper

AN - SCOPUS:50649123949

ER -