FSM-based pronunciation modeling using articulatory phonological code

Chi Hu, Xiaodan Zhuang, Mark Allan Hasegawa-Johnson

Research output: Contribution to conference › Paper


According to articulatory phonology, the gestural score is an invariant speech representation. Though the timing schemes, i.e., the onsets and offsets, of the gestural activations may vary, the ensemble of these activations tends to remain unchanged, informing the speech content. In this work, we propose a pronunciation modeling method that uses a finite state machine (FSM) to represent the invariance of a gestural score. Given the "canonical" gestural score (CGS) of a word with a known activation timing scheme, the plausible activation onsets and offsets are recursively generated and encoded as a weighted FSM. An empirical measure is used to prune out gestural activation timing schemes that deviate too much from the CGS. Speech recognition is achieved by matching the recovered gestural activations to the FSM-encoded gestural scores of different speech contents. We carry out pilot word classification experiments using synthesized data from one speaker. The proposed pronunciation modeling achieves over 90% accuracy for a vocabulary of 139 words with no training observations, outperforming direct use of the CGS.
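The abstract's pipeline (enumerate plausible onset/offset shifts around the canonical gestural score, weight each variant by its deviation, prune schemes that deviate too much, then classify by best-weighted match) can be sketched as follows. This is an illustrative toy, not the paper's implementation: gestures are reduced to (onset, offset) frame pairs, the weighted FSM is stood in for by a dictionary mapping timing schemes to deviation weights, and the names `timing_variants`, `classify`, `shift`, and `max_dev` are all assumptions for the sketch.

```python
from itertools import product

def timing_variants(cgs, shift=1, max_dev=2):
    """Enumerate plausible activation timing schemes around a canonical
    gestural score (CGS), given as a list of (onset, offset) frame pairs.
    Each boundary may move by up to `shift` frames; schemes whose total
    absolute deviation from the CGS exceeds `max_dev` are pruned, mimicking
    the paper's empirical pruning measure. Returns {scheme: weight}, a
    dictionary stand-in for the weighted FSM (weight 0 = canonical).
    All parameter names and values here are illustrative.
    """
    shifts = range(-shift, shift + 1)
    per_gesture = []
    for on, off in cgs:
        # Shifted onset must still precede the shifted offset.
        opts = [(on + a, off + b) for a in shifts for b in shifts
                if on + a < off + b]
        per_gesture.append(opts)
    variants = {}
    for combo in product(*per_gesture):
        dev = sum(abs(o - co) + abs(f - cf)
                  for (o, f), (co, cf) in zip(combo, cgs))
        if dev <= max_dev:
            variants[combo] = dev
    return variants

def classify(observed, lexicon, **kw):
    """Match an observed timing scheme against each word's variant set and
    return the word containing it with the lowest weight (deviation)."""
    best_word, best_w = None, float("inf")
    for word, cgs in lexicon.items():
        w = timing_variants(cgs, **kw).get(tuple(observed))
        if w is not None and w < best_w:
            best_word, best_w = word, w
    return best_word

# Toy two-word lexicon with made-up gestural timings:
lexicon = {"pan": [(0, 3), (2, 6)], "ban": [(0, 4), (3, 7)]}
print(classify([(0, 3), (3, 6)], lexicon))  # → pan (deviation 1 vs. 2)
```

In the paper proper, the variant set is encoded as a weighted finite state machine so that matching against recovered gestural activations is an FSM composition rather than a dictionary lookup; the exhaustive enumeration above is only tractable for short scores.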


Keywords

  • Articulatory phonology
  • Finite state machine
  • Speech gesture
  • Speech production

ASJC Scopus subject areas

  • Language and Linguistics
  • Speech and Hearing

Cite this

Hu, C., Zhuang, X., & Hasegawa-Johnson, M. A. (2010). FSM-based pronunciation modeling using articulatory phonological code (pp. 2274-2277). Paper presented at the 11th Annual Conference of the International Speech Communication Association: Spoken Language Processing for All, INTERSPEECH 2010, Makuhari, Chiba, Japan.