Semi-supervised training of Gaussian mixture models by conditional entropy minimization

Research output: Contribution to conference › Paper

Abstract

In this paper, we propose a new semi-supervised training method for Gaussian mixture models. We add a conditional entropy minimizer to the maximum mutual information criterion, which makes it possible to incorporate unlabeled data in a discriminative training framework. The training method is simple but surprisingly effective. The preconditioned conjugate gradient method provides a reasonable convergence rate for the parameter updates. Phonetic classification experiments on the TIMIT corpus demonstrate significant improvements from unlabeled data under our training criterion.
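As a rough illustration of the criterion described in the abstract, the sketch below combines a conditional log-likelihood (MMI) term on labeled data with a conditional entropy term on unlabeled data for a GMM-based classifier. This is a minimal hypothetical sketch, not the authors' code: the one-Gaussian-per-class model, the trade-off weight `lam`, and all function names are assumptions made for illustration; the paper optimizes its criterion with preconditioned conjugate gradient, which is not shown here.

```python
# Hypothetical sketch of a semi-supervised objective for a Gaussian classifier:
# maximize conditional log-likelihood (the MMI criterion for classification) on
# labeled data while minimizing conditional entropy on unlabeled data.
# Assumptions: one Gaussian per class; `lam` is an illustrative trade-off weight.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def class_joint_loglik(X, means, covs, priors):
    """Per-class joint log-likelihoods log p(x, y) = log p(x | y) + log p(y)."""
    return np.column_stack([
        multivariate_normal.logpdf(X, mean=m, cov=c) + np.log(p)
        for m, c, p in zip(means, covs, priors)
    ])

def semi_supervised_objective(X_lab, y_lab, X_unlab, means, covs, priors, lam=1.0):
    def log_posteriors(X):
        # Posteriors log p(y | x) via softmax normalization of the joint scores.
        ll = class_joint_loglik(X, means, covs, priors)
        return ll - logsumexp(ll, axis=1, keepdims=True)

    # MMI / conditional likelihood term: sum of log p(y_i | x_i) on labeled data.
    logpost_lab = log_posteriors(X_lab)
    mmi = logpost_lab[np.arange(len(y_lab)), y_lab].sum()

    # Conditional entropy term H(y | x) averaged over the unlabeled points.
    logpost_un = log_posteriors(X_unlab)
    post_un = np.exp(logpost_un)
    cond_entropy = -(post_un * logpost_un).sum(axis=1).mean()

    # Objective to maximize: fit the labels, sharpen decisions on unlabeled data.
    return mmi - lam * cond_entropy
```

In this framing, the entropy term pushes the decision boundary away from dense regions of unlabeled data, which is the usual intuition for entropy-minimization approaches to semi-supervised learning.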

Original language: English (US)
Pages: 1353-1356
Number of pages: 4
State: Published - Dec 1 2010
Event: 11th Annual Conference of the International Speech Communication Association: Spoken Language Processing for All, INTERSPEECH 2010 - Makuhari, Chiba, Japan
Duration: Sep 26 2010 – Sep 30 2010

Other

Other: 11th Annual Conference of the International Speech Communication Association: Spoken Language Processing for All, INTERSPEECH 2010
Country: Japan
City: Makuhari, Chiba
Period: 9/26/10 – 9/30/10

Keywords

  • Conditional entropy
  • Gaussian Mixture Models
  • Phonetic classification
  • Semi-supervised learning

ASJC Scopus subject areas

  • Language and Linguistics
  • Speech and Hearing

Cite this

Huang, J. T., & Hasegawa-Johnson, M. A. (2010). Semi-supervised training of Gaussian mixture models by conditional entropy minimization. Paper presented at the 11th Annual Conference of the International Speech Communication Association: Spoken Language Processing for All, INTERSPEECH 2010, Makuhari, Chiba, Japan, 1353-1356.