Abstract
High-quality speech-to-lips conversion, investigated in this work, renders realistic lip movements (video) consistent with the input speech (audio) without knowing its linguistic content. Instead of memoryless frame-based conversion, we adopt maximum likelihood estimation of the visual parameter trajectories using an audio-visual joint Gaussian Mixture Model (GMM). We propose a Minimum Converted Trajectory Error (MCTE) approach to further refine the converted visual parameters. First, we reduce the conversion error by training the joint audio-visual GMM with weighted audio and visual likelihoods. MCTE then uses the generalized probabilistic descent (GPD) algorithm to minimize the conversion error of the visual parameter trajectories, defined over the optimal Gaussian kernel sequence selected for the input speech. We demonstrate the effectiveness of the proposed methods on the LIPS 2009 Visual Speech Synthesis Challenge dataset, without knowing the linguistic (phonetic) content of the input speech.
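The abstract gives no equations, but the trajectory step it describes is the standard maximum-likelihood parameter generation used in GMM-based conversion: given the optimal Gaussian kernel sequence for the input audio, per-frame conditional means and variances of the stacked static-plus-delta visual features are combined into one smooth trajectory in closed form. The sketch below illustrates that step for a single visual dimension; it is a minimal illustration under those assumptions, not the authors' implementation, and all names (`build_delta_matrix`, `mle_trajectory`, `E`, `D`) are hypothetical. The MCTE refinement would then further adjust the Gaussian parameters by gradient steps (GPD) on the error of the resulting trajectory.

```python
import numpy as np

def build_delta_matrix(T):
    """W maps a static trajectory y (length T) to stacked
    [static; delta] features (length 2T), using the common delta
    window 0.5 * (y[t+1] - y[t-1]) with clamped boundaries."""
    W = np.zeros((2 * T, T))
    for t in range(T):
        W[2 * t, t] = 1.0                        # static component
        W[2 * t + 1, max(t - 1, 0)] -= 0.5       # delta component
        W[2 * t + 1, min(t + 1, T - 1)] += 0.5
    return W

def mle_trajectory(E, D):
    """Closed-form ML trajectory given per-frame conditional means E
    (T, 2) and diagonal variances D (T, 2) of the [static, delta]
    visual features along the selected Gaussian kernel sequence:
        y* = argmax_y N(W y; E, D) = (W' D^-1 W)^-1 W' D^-1 E
    """
    T = E.shape[0]
    W = build_delta_matrix(T)
    Dinv = 1.0 / D.reshape(-1)           # diagonal precisions, length 2T
    A = W.T @ (Dinv[:, None] * W)        # W' D^-1 W (SPD, hence solvable)
    b = W.T @ (Dinv * E.reshape(-1))     # W' D^-1 E
    return np.linalg.solve(A, b)         # smooth visual trajectory (T,)

# Toy usage: noisy per-frame targets are smoothed into one trajectory.
T = 50
E = np.zeros((T, 2))
E[:, 0] = np.sin(np.linspace(0, np.pi, T)) + 0.1 * np.random.randn(T)
D = np.tile(np.array([0.05, 0.01]), (T, 1))
y = mle_trajectory(E, D)
```

Because the delta rows of `W` couple neighboring frames, the solve trades per-frame fidelity against temporal smoothness, which is what distinguishes this trajectory formulation from memoryless frame-by-frame conversion.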
| Original language | English (US) |
| --- | --- |
| Pages | 1736-1739 |
| Number of pages | 4 |
| State | Published - Dec 1 2010 |
| Event | 11th Annual Conference of the International Speech Communication Association: Spoken Language Processing for All, INTERSPEECH 2010 - Makuhari, Chiba, Japan (Sep 26 2010 → Sep 30 2010) |
Other
| Other | 11th Annual Conference of the International Speech Communication Association: Spoken Language Processing for All, INTERSPEECH 2010 |
| --- | --- |
| Country | Japan |
| City | Makuhari, Chiba |
| Period | 9/26/10 → 9/30/10 |
Keywords
- Minimum conversion error
- Minimum generation error
- Speech-to-lips conversion
- Visual speech synthesis
ASJC Scopus subject areas
- Language and Linguistics
- Speech and Hearing