Joint Optimization of Masks and Deep Recurrent Neural Networks for Monaural Source Separation

Po Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis

Research output: Contribution to journal › Article

Abstract

Monaural source separation is important for many real-world applications. It is challenging because, with only a single channel of information available and no further constraints, infinitely many solutions are possible. In this paper, we explore joint optimization of masking functions and deep recurrent neural networks for monaural source separation tasks, including speech separation, singing voice separation, and speech denoising. The joint optimization of the deep recurrent neural networks with an extra masking layer enforces a reconstruction constraint. Moreover, we explore a discriminative criterion for training neural networks to further enhance the separation performance. We evaluate the proposed system on the TSP, MIR-1K, and TIMIT datasets for speech separation, singing voice separation, and speech denoising tasks, respectively. Our approaches achieve 2.30-4.98 dB SDR gain compared to NMF models in the speech separation task, 2.30-2.48 dB GNSDR gain and 4.32-5.42 dB GSIR gain compared to existing models in the singing voice separation task, and outperform NMF and DNN baselines in the speech denoising task.
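The masking layer mentioned in the abstract can be illustrated with a soft time-frequency mask: each source's mask is its network output magnitude divided by the sum of both outputs, and the masks are applied to the mixture spectrogram so the estimates sum back to the mixture. The following minimal NumPy sketch shows the idea; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def soft_time_frequency_mask(y1_hat, y2_hat, mixture_mag):
    """Apply soft time-frequency masks to a mixture magnitude spectrogram.

    y1_hat, y2_hat: network output magnitudes for the two sources (F x T).
    mixture_mag:    magnitude spectrogram of the mixture (F x T).
    Returns the two masked source estimates, which sum to mixture_mag.
    """
    eps = 1e-8  # guard against division by zero in silent bins
    denom = np.abs(y1_hat) + np.abs(y2_hat) + eps
    mask1 = np.abs(y1_hat) / denom   # each mask lies in [0, 1]
    mask2 = np.abs(y2_hat) / denom   # masks sum to ~1 per T-F bin
    return mask1 * mixture_mag, mask2 * mixture_mag
```

Because the masked estimates partition the mixture energy bin by bin, the reconstruction constraint holds by construction, which is what allows the mask and the recurrent network to be trained jointly end to end.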

Original language: English (US)
Pages (from-to): 2136-2147
Number of pages: 12
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 23
Issue number: 12
State: Published - Dec 1 2015

Keywords

  • Deep recurrent neural network (DRNN)
  • discriminative training
  • monaural source separation
  • time-frequency masking

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Acoustics and Ultrasonics
  • Computational Mathematics
  • Electrical and Electronic Engineering

