Abstract
Traditional subspace methods for face recognition compute a measure of similarity between images after projecting them onto a fixed linear subspace spanned by some principal component vectors (a.k.a. 'eigenfaces') of a training set of images. By assuming a parametric Gaussian distribution over the subspace and a symmetric Gaussian noise model for the image given a point in the subspace, we can endow this framework with a probabilistic interpretation so that Bayes-optimal decisions can be made. However, we expect that different image clusters (corresponding, say, to different poses and expressions) will be best represented by different subspaces. In this paper, we study the recognition performance of a mixture of local linear subspaces model that can be fit to training data using the expectation-maximization (EM) algorithm. The mixture model outperforms a nearest-neighbor classifier that operates in a PCA subspace.
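One standard formalization consistent with the abstract's description is a mixture of probabilistic PCA components (local linear subspaces with isotropic Gaussian noise) fit with EM. The sketch below is a minimal illustration, not the authors' implementation: the function `fit_mppca`, its closed-form M-step (following the Tipping and Bishop MPPCA updates), and all parameter choices are assumptions for illustration, and real face images would be vectorized to far higher dimension than the toy data shown here.

```python
# Minimal sketch (assumed, not the authors' code): EM for a mixture of
# probabilistic PCA models, i.e., local linear subspaces with isotropic
# Gaussian noise. Small dimensions are used purely for illustration.
import numpy as np
from scipy.stats import multivariate_normal

def fit_mppca(X, n_components=3, q=2, n_iter=50, seed=0):
    """Fit a mixture of probabilistic PCA models by EM.

    X: (n, d) data matrix; q: dimensionality of each local subspace.
    Returns mixing weights pi, means mu, loadings W, noise variances sigma2.
    """
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    rng = np.random.default_rng(seed)
    pi = np.full(n_components, 1.0 / n_components)
    mu = X[rng.choice(n, n_components, replace=False)]  # init means from data
    W = rng.normal(size=(n_components, d, q)) * 0.1     # subspace loadings
    sigma2 = np.full(n_components, X.var())             # isotropic noise vars

    for _ in range(n_iter):
        # E-step: responsibilities under each local Gaussian with
        # covariance C_k = W_k W_k^T + sigma2_k I (computed in log space).
        log_r = np.empty((n, n_components))
        for k in range(n_components):
            C = W[k] @ W[k].T + sigma2[k] * np.eye(d)
            log_r[:, k] = np.log(pi[k]) + multivariate_normal.logpdf(X, mu[k], C)
        log_r -= log_r.max(axis=1, keepdims=True)
        R = np.exp(log_r)
        R /= R.sum(axis=1, keepdims=True)

        # M-step: weighted means, then closed-form PPCA update from the
        # eigendecomposition of each responsibility-weighted covariance.
        Nk = R.sum(axis=0)
        pi = Nk / n
        for k in range(n_components):
            mu[k] = R[:, k] @ X / Nk[k]
            Xc = X - mu[k]
            S = (R[:, k][:, None] * Xc).T @ Xc / Nk[k]
            vals, vecs = np.linalg.eigh(S)             # eigenvalues ascending
            vals, vecs = vals[::-1], vecs[:, ::-1]     # reorder descending
            sigma2[k] = max(vals[q:].mean(), 1e-6)     # discarded variance
            W[k] = vecs[:, :q] * np.sqrt(np.maximum(vals[:q] - sigma2[k], 0.0))
    return pi, mu, W, sigma2

# Toy usage on random data; face images would first be vectorized.
X = np.random.default_rng(1).normal(size=(200, 10))
pi, mu, W, sigma2 = fit_mppca(X, n_components=3, q=2)
```

Under this model, recognition could proceed by fitting one such mixture per person and assigning a test image to the identity whose mixture gives the highest likelihood, which is the Bayes-optimal decision under the assumed densities.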
Original language | English (US)
---|---
Pages (from-to) | 32-37
Number of pages | 6
Journal | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
State | Published - 1998
Event | Proceedings of the 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Santa Barbara, CA, USA
Duration | Jun 23, 1998 → Jun 25, 1998
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition