Model-based sequential organization in cochannel speech

Yang Shao, Deliang Wang

Research output: Contribution to journal › Article › peer-review

Abstract

A human listener has the ability to follow a speaker's voice while others are speaking simultaneously; in particular, the listener can organize the time-frequency energy of the same speaker across time into a single stream. In this paper, we focus on sequential organization in cochannel speech, or mixtures of two voices. We extract minimally corrupted segments, or usable speech, from cochannel speech using a robust multipitch tracking algorithm. The extracted usable speech is shown to capture speaker characteristics and to improve speaker identification (SID) performance across various target-to-interferer ratios. To utilize speaker characteristics for sequential organization, we extend the traditional SID framework to cochannel speech and derive a joint objective for sequential grouping and SID, which reduces the task to a search for the optimum hypothesis. Subsequently, we propose a hypothesis pruning algorithm based on speaker models that makes the search computationally efficient. Evaluation results show that the proposed system approaches the ceiling SID performance obtained with prior pitch information and yields significant improvement over alternative approaches to sequential organization.
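
The joint grouping-and-SID objective turns sequential organization into a search over assignments of usable-speech segments to speakers, kept tractable by model-based hypothesis pruning. The Python sketch below illustrates that idea in a simplified form; the segment_score helper, the assumption of pre-fit speaker models exposing a score_samples method (e.g., scikit-learn GaussianMixture), and the fixed beam width are illustrative choices, not the paper's actual formulation.

```python
import numpy as np

def segment_score(features, model):
    # Average log-likelihood of a segment's frames under one speaker model.
    # `model` is assumed to expose score_samples(frames), e.g. a scikit-learn
    # GaussianMixture fit on that speaker's clean training data (assumption).
    return float(np.mean(model.score_samples(features)))

def best_hypothesis(segments, speaker_models, beam_width=8):
    # Greedy beam search over assignments of usable-speech segments to the
    # two hypothesized speakers. Each hypothesis is a tuple of speaker ids,
    # one per segment; its score is the sum of segment log-likelihoods.
    ids = list(speaker_models)
    beam = [((), 0.0)]  # (partial assignment, running score)
    for seg in segments:
        expanded = []
        for assignment, score in beam:
            for spk in ids:
                s = score + segment_score(seg, speaker_models[spk])
                expanded.append((assignment + (spk,), s))
        # Hypothesis pruning: retain only the top-scoring partial assignments
        # so the search stays linear in the number of segments.
        expanded.sort(key=lambda pair: pair[1], reverse=True)
        beam = expanded[:beam_width]
    return max(beam, key=lambda pair: pair[1])
```

Given two speaker models and a list of per-segment feature matrices, best_hypothesis(segments, {"A": model_a, "B": model_b}) returns the pruned search's best assignment and its score. The paper's system additionally searches over which speaker models to hypothesize in the first place, which this sketch omits for brevity.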

Original language: English (US)
Pages (from-to): 289-298
Number of pages: 10
Journal: IEEE Transactions on Audio, Speech and Language Processing
Volume: 14
Issue number: 1
DOIs
State: Published - Jan 2006
Externally published: Yes

Keywords

  • Auditory scene analysis
  • Cochannel speech
  • Model-based approach
  • Sequential organization
  • Speaker identification (SID)
  • Usable speech

ASJC Scopus subject areas

  • Acoustics and Ultrasonics
  • Electrical and Electronic Engineering
