Abstract

This paper presents a novel fused hidden Markov model (fused HMM) for integrating tightly coupled time series, such as the audio and visual features of speech. In this model, the two time series are first modeled separately by two conventional HMMs. The resulting HMMs are then fused by a probabilistic fusion model that is optimal according to the maximum entropy principle and a maximum mutual information criterion. Simulations and bimodal speaker verification experiments show that the proposed model significantly reduces recognition errors in both noiseless and noisy environments.
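
The sketch below is only a simplified illustration of the per-modality modeling step described in the abstract: each modality (audio, visual) gets its own conventional HMM, and the two model scores are combined. It uses plain weighted score-level fusion rather than the paper's maximum-entropy / maximum-mutual-information fusion model, whose details are not given here. The identifiers audio_feats, visual_feats, and the weight alpha are illustrative placeholders, and hmmlearn is assumed as the HMM library.

```python
# Minimal sketch, assuming hmmlearn: train one Gaussian HMM per modality,
# then combine their log-likelihoods with a simple weighted sum.
# This is NOT the paper's fused HMM; it is a baseline score-level fusion.
import numpy as np
from hmmlearn import hmm

def train_modality_hmm(features, n_states=3, seed=0):
    """Fit a conventional Gaussian HMM to one modality's feature frames."""
    model = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag",
                            n_iter=50,
                            random_state=seed)
    model.fit(features)  # features: array of shape (n_frames, n_dims)
    return model

def fused_score(audio_hmm, visual_hmm, audio_feats, visual_feats, alpha=0.5):
    """Weighted sum of per-modality log-likelihoods (score-level fusion)."""
    return (alpha * audio_hmm.score(audio_feats)
            + (1.0 - alpha) * visual_hmm.score(visual_feats))

# Synthetic features standing in for real audio/visual observations.
rng = np.random.default_rng(0)
audio_feats = rng.normal(size=(200, 13))   # e.g. MFCC-like frames
visual_feats = rng.normal(size=(200, 6))   # e.g. lip-shape features

audio_hmm = train_modality_hmm(audio_feats)
visual_hmm = train_modality_hmm(visual_feats)

print("fused verification score:",
      fused_score(audio_hmm, visual_hmm, audio_feats, visual_feats))
```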

Original language: English (US)
Pages (from-to): 573-581
Number of pages: 9
Journal: IEEE Transactions on Signal Processing
Volume: 52
Issue number: 3
DOIs
State: Published - Mar 1 2004

Keywords

  • Bimodal speech processing
  • Hidden Markov model
  • Information fusion

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Signal Processing
