Abstract
This paper presents a novel fused hidden Markov model (fused HMM) that integrates the audio and visual features of speech. In this model, individually trained audio and visual HMMs are coupled by a general probabilistic fusion method that is optimal in the maximum-entropy sense. Specifically, the fusion method uses the dependencies between the audio hidden states and the visual observations to infer the dependencies between audio and video. The learning and inference algorithms described in this paper can handle audio and video features with different data rates and durations. In speaker verification experiments, the proposed method significantly reduces the recognition error rate compared with unimodal HMMs and other, simpler fusion methods.
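The fusion idea in the abstract, coupling the visual observations to the audio hidden states, can be sketched in code. The snippet below is a minimal illustration, not the paper's actual algorithm: it uses toy 1-D Gaussian emissions, a hypothetical two-state audio HMM, and a simple index-scaling alignment to handle the different audio/video frame rates. All parameters and the alignment rule are illustrative assumptions.

```python
import numpy as np

def viterbi(obs, pi, A, means, var):
    """Most likely state path for a 1-D Gaussian-emission HMM (log domain)."""
    def loglik(x):
        # log N(x; means, var), evaluated for every state at once
        return -0.5 * (np.log(2 * np.pi * var) + (x - means) ** 2 / var)
    T, S = len(obs), len(pi)
    delta = np.log(pi) + loglik(obs[0])
    psi = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)   # scores[i, j]: from state i to j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + loglik(obs[t])
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1][path[t + 1]]
    return path, delta.max()

def fused_loglik(audio, video, pi, A, a_means, a_var, v_means, v_var):
    """log p(audio) + log p(video | audio states): the video stream is scored
    conditioned on the audio Viterbi states, aligned by index scaling so the
    two streams may have different frame rates."""
    states, audio_ll = viterbi(audio, pi, A, a_means, a_var)
    # map each video frame to the concurrent audio state (toy alignment)
    idx = np.arange(len(video)) * len(audio) // len(video)
    s = states[idx]
    video_ll = np.sum(-0.5 * (np.log(2 * np.pi * v_var[s])
                              + (video - v_means[s]) ** 2 / v_var[s]))
    return audio_ll + video_ll

# Illustrative two-state model (all values are assumptions for the sketch)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
a_means, a_var = np.array([0.0, 3.0]), 1.0
v_means, v_var = np.array([1.0, -1.0]), np.array([0.5, 0.5])

audio = np.array([0.1, 0.2, 2.9, 3.1, 3.0, 0.0])  # 6 audio frames
video = np.array([1.1, -0.9, -1.2])               # 3 video frames (slower rate)
print(fused_loglik(audio, video, pi, A, a_means, a_var, v_means, v_var))
```

For verification, the fused score of a genuine audio-video pair would be compared against per-speaker thresholds; a mismatched video stream lowers the conditional term log p(video | audio states) and hence the fused score.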
Original language | English (US)
---|---
Title of host publication | IEEE International Conference on Multi-Media and Expo
Pages | 1093-1096
Number of pages | 4
Edition | II/TUESDAY
State | Published - Dec 1 2000
Event | 2000 IEEE International Conference on Multimedia and Expo (ICME 2000) - New York, NY, United States
Duration | Jul 30 2000 → Aug 2 2000
Other
Other | 2000 IEEE International Conference on Multimedia and Expo (ICME 2000)
---|---
Country/Territory | United States
City | New York, NY
Period | 7/30/00 → 8/2/00
ASJC Scopus subject areas
- Engineering (all)