ASR for under-resourced languages from probabilistic transcription

Mark A. Hasegawa-Johnson, Preethi Jyothi, Daniel McCloy, Majid Mirbagheri, Giovanni M. Di Liberto, Amit Das, Bradley Ekin, Chunxi Liu, Vimal Manohar, Hao Tang, Edmund C. Lalor, Nancy F. Chen, Paul Hager, Tyler Kekona, Rose Sloan, Adrian K.C. Lee

Research output: Contribution to journal › Article

Abstract

In many under-resourced languages it is possible to find text, and it is possible to find speech, but transcribed speech suitable for training automatic speech recognition (ASR) is unavailable. In the absence of native transcripts, this paper proposes the use of a probabilistic transcript: a probability mass function over possible phonetic transcripts of the waveform. Three sources of probabilistic transcripts are demonstrated. First, self-training is a well-established semisupervised learning technique, in which a cross-lingual ASR first labels unlabeled speech, and is then adapted using the same labels. Second, mismatched crowdsourcing is a recent technique in which nonspeakers of the language are asked to write what they hear, and their nonsense transcripts are decoded using noisy channel models of second-language speech perception. Third, EEG distribution coding is a new technique in which nonspeakers of the language listen to it, and their electrocortical response signals are interpreted to indicate probabilities. ASR was trained in four languages without native transcripts. Adaptation using mismatched crowdsourcing significantly outperformed self-training, and both significantly outperformed a cross-lingual baseline. Both EEG distribution coding and text-derived phone language models were shown to improve the quality of probabilistic transcripts derived from mismatched crowdsourcing.
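The central object above, a probabilistic transcript, can be made concrete with a minimal sketch: a probability mass function over candidate phone sequences for one waveform. The candidate sequences and scores below are purely illustrative (they are not from the paper), standing in for, e.g., decoded mismatched-crowdsourcing hypotheses.

```python
# Minimal sketch of a probabilistic transcript: a pmf over candidate
# phonetic transcripts of a single waveform. Candidates and scores
# are hypothetical placeholders, not data from the paper.

def normalize(weighted_transcripts):
    """Turn unnormalized hypothesis scores into a probability mass function."""
    total = sum(weighted_transcripts.values())
    return {t: w / total for t, w in weighted_transcripts.items()}

# Hypothetical candidate phone sequences with unnormalized scores.
candidates = {
    ("b", "a", "t"): 3.0,
    ("p", "a", "t"): 2.0,
    ("b", "a", "d"): 1.0,
}

pmf = normalize(candidates)

# The maximum a posteriori transcript is the mode of the pmf; ASR
# training can instead weight each candidate by its probability.
map_transcript = max(pmf, key=pmf.get)
```

Training on the full pmf, rather than only the single best transcript, is what distinguishes this approach from ordinary semisupervised self-training with hard labels.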

Original language: English (US)
Pages (from-to): 46-59
Number of pages: 14
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 25
Issue number: 1
DOI: https://doi.org/10.1109/TASLP.2016.2621659
State: Published - Jan 2017

Keywords

  • Automatic speech recognition
  • EEG
  • mismatched crowdsourcing
  • under-resourced languages

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Acoustics and Ultrasonics
  • Computational Mathematics
  • Electrical and Electronic Engineering


Cite this

    Hasegawa-Johnson, M. A., Jyothi, P., McCloy, D., Mirbagheri, M., Di Liberto, G. M., Das, A., Ekin, B., Liu, C., Manohar, V., Tang, H., Lalor, E. C., Chen, N. F., Hager, P., Kekona, T., Sloan, R., & Lee, A. K. C. (2017). ASR for under-resourced languages from probabilistic transcription. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 25(1), 46-59. https://doi.org/10.1109/TASLP.2016.2621659