Boosting Gaussian mixture models via discriminant analysis

Hao Tang, Thomas S Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The Gaussian mixture model (GMM) can approximate arbitrary probability distributions, which makes it a powerful tool for feature representation and classification. However, its performance suffers when training data are insufficient, especially when the feature space is of high dimensionality. In this paper, we present a novel approach to boosting GMMs via discriminant analysis, in which the required amount of training data depends only on the number of classes, regardless of the feature dimension. We demonstrate the effectiveness of the proposed BoostGMM-DA classifier by applying it to the problem of emotion recognition in speech. Our experimental results indicate that the BoostGMM-DA classifier achieves significantly higher recognition rates than the conventional GMM minimum error rate (MER) classifier under the same training conditions, and that it requires significantly less training data to yield recognition rates comparable to those of the GMM MER classifier.
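
For reference, the sketch below illustrates the conventional GMM minimum error rate (MER) baseline that the abstract compares against: one GMM is fit per class, and a test sample is assigned to the class whose GMM gives the highest likelihood (equal class priors assumed). This is not the proposed BoostGMM-DA method; the component count, diagonal covariances, and use of scikit-learn are illustrative assumptions.

# Minimal sketch of a GMM minimum-error-rate (MER) baseline classifier.
# Not the authors' BoostGMM-DA; parameters and library choice are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm_mer(X, y, n_components=4):
    """Fit one GMM per class on that class's training samples."""
    models = {}
    for c in np.unique(y):
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type='diag', random_state=0)
        models[c] = gmm.fit(X[y == c])
    return models

def predict_gmm_mer(models, X):
    """Assign each sample to the class whose GMM yields the highest log-likelihood."""
    classes = sorted(models)
    log_liks = np.column_stack([models[c].score_samples(X) for c in classes])
    return np.asarray(classes)[np.argmax(log_liks, axis=1)]

Usage would follow the usual train/test split, e.g. models = fit_gmm_mer(X_train, y_train) followed by y_pred = predict_gmm_mer(models, X_test).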

Original language: English (US)
Title of host publication: 2008 19th International Conference on Pattern Recognition, ICPR 2008
State: Published - Dec 1 2008
Event: 2008 19th International Conference on Pattern Recognition, ICPR 2008 - Tampa, FL, United States
Duration: Dec 8 2008 - Dec 11 2008

Publication series

Name: Proceedings - International Conference on Pattern Recognition
ISSN (Print): 1051-4651


ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition

