Acoustic landmarks contain more information about the phone string than other frames for automatic speech recognition with deep neural network acoustic model

Di He, Boon Pang Lim, Xuesong Yang, Mark Hasegawa-Johnson, Deming Chen

Research output: Contribution to journal › Article

Abstract

Most mainstream automatic speech recognition (ASR) systems consider all feature frames equally important. However, acoustic landmark theory is based on the contrary idea that some frames are more important than others. Acoustic landmark theory exploits quantal nonlinearities in the articulatory-acoustic and acoustic-perceptual relations to define landmark times at which the speech spectrum abruptly changes or reaches an extremum; frames overlapping landmarks have been demonstrated to be sufficient for speech perception. In this work, experiments are conducted on the TIMIT corpus, with both Gaussian mixture model (GMM)- and deep neural network (DNN)-based ASR systems, and it is found that frames containing landmarks are more informative for ASR than others. It is discovered that altering the level of emphasis on landmarks by re-weighting acoustic likelihood tends to reduce the phone error rate (PER). Furthermore, by leveraging landmarks as a heuristic, one of the hybrid DNN frame-dropping strategies maintained a PER within 0.44% of optimal while scoring less than half (45.8%, to be precise) of the frames. This hybrid strategy outperforms other non-heuristic-based methods and demonstrates the potential of landmarks for reducing computation.
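The two ideas in the abstract (re-weighting acoustic likelihoods near landmarks, and dropping non-landmark frames before scoring) can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the scalar `landmark_weight`, and the simple stride-based frame dropping are all illustrative assumptions.

```python
import numpy as np

def reweight_and_drop(log_likelihoods, landmark_frames,
                      landmark_weight=1.5, keep_ratio=0.4):
    """Hypothetical sketch of landmark-guided scoring (not the paper's method).

    log_likelihoods : per-frame acoustic log-likelihoods, shape (T,)
    landmark_frames : indices of frames overlapping acoustic landmarks
    landmark_weight : emphasis placed on landmark frames (assumed scalar)
    keep_ratio      : approximate fraction of non-landmark frames to score

    Returns (weighted utterance score, fraction of frames actually scored).
    """
    T = len(log_likelihoods)
    landmark_mask = np.zeros(T, dtype=bool)
    landmark_mask[list(landmark_frames)] = True

    # Keep every landmark frame; thin the non-landmark frames by striding.
    non_lm = np.flatnonzero(~landmark_mask)
    stride = max(1, int(round(1.0 / keep_ratio)))
    kept_non_lm = non_lm[::stride]

    # Landmark frames get extra weight; dropped frames get weight zero.
    weights = np.zeros(T)
    weights[landmark_mask] = landmark_weight
    weights[kept_non_lm] = 1.0

    scored_fraction = np.count_nonzero(weights) / T
    return float(np.dot(weights, log_likelihoods)), scored_fraction
```

Under this toy scheme, a `keep_ratio` near 0.5 would score roughly half the frames while still evaluating every landmark frame, mirroring the paper's observation that landmark-overlapping frames carry a disproportionate share of the information.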

Original language: English (US)
Pages (from-to): 3207-3219
Number of pages: 13
Journal: Journal of the Acoustical Society of America
Volume: 143
Issue number: 6
State: Published - Jun 1 2018

ASJC Scopus subject areas

  • Arts and Humanities (miscellaneous)
  • Acoustics and Ultrasonics
