A computational auditory scene analysis system for speech segregation and robust speech recognition

Yang Shao, Soundararajan Srinivasan, Zhaozhang Jin, DeLiang Wang

Research output: Contribution to journal › Article › peer-review

Abstract

A conventional automatic speech recognizer does not perform well in the presence of multiple sound sources, while human listeners are able to segregate and recognize a signal of interest through auditory scene analysis. We present a computational auditory scene analysis system for separating and recognizing target speech in the presence of competing speech or noise. We estimate, in two stages, the ideal binary time-frequency (T-F) mask which retains the mixture in a local T-F unit if and only if the target is stronger than the interference within the unit. In the first stage, we use harmonicity to segregate the voiced portions of individual sources in each time frame based on multipitch tracking. Additionally, unvoiced portions are segmented based on an onset/offset analysis. In the second stage, speaker characteristics are used to group the T-F units across time frames. The resulting masks are used in an uncertainty decoding framework for automatic speech recognition. We evaluate our system on a speech separation challenge and show that our system yields substantial improvement over the baseline performance.
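For illustration only (not part of the article), the ideal binary time-frequency mask defined in the abstract can be sketched as below, assuming access to the premixed target and interference signals. The STFT front end, the 0 dB local SNR criterion, and the helper name `ideal_binary_mask` are assumptions made for this sketch; the paper's own system estimates the mask from the mixture in two stages rather than computing it from premixed signals.

```python
import numpy as np
from scipy.signal import stft

def ideal_binary_mask(target, interference, fs=16000, lc_db=0.0):
    """Sketch of the ideal binary T-F mask: a T-F unit is retained (mask = 1)
    if and only if the target is stronger than the interference in that unit.

    `target` and `interference` are premixed signals of equal length; the STFT
    decomposition and 0 dB criterion here are illustrative assumptions.
    """
    _, _, T = stft(target, fs=fs, nperseg=512)        # target T-F representation
    _, _, I = stft(interference, fs=fs, nperseg=512)  # interference T-F representation
    eps = 1e-12                                        # avoid division by zero / log of zero
    local_snr_db = 10.0 * np.log10((np.abs(T) ** 2 + eps) / (np.abs(I) ** 2 + eps))
    return (local_snr_db > lc_db).astype(np.uint8)     # 1 = target-dominant unit
```

In a mask-based recognition pipeline such as the one described, a mask of this form would be applied to the mixture's T-F representation before resynthesis, or used to flag unreliable features for an uncertainty-decoding recognizer.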

Original language: English (US)
Pages (from-to): 77-93
Number of pages: 17
Journal: Computer Speech and Language
Volume: 24
Issue number: 1
State: Published - Jan 2010
Externally published: Yes

Keywords

  • Binary time-frequency mask
  • Computational auditory scene analysis
  • Robust speech recognition
  • Speech segregation
  • Uncertainty decoding

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Human-Computer Interaction
