A Framework for Unified Real-Time Personalized and Non-Personalized Speech Enhancement

Zhepei Wang, Ritwik Giri, Devansh Shah, Jean Marc Valin, Michael M. Goodwin, Paris Smaragdis

Research output: Contribution to journal › Conference article › peer-review

Abstract

In this study, we present an approach to training a single speech enhancement network that can perform both personalized and non-personalized speech enhancement. This is achieved by incorporating a frame-wise conditioning input that specifies the type of enhancement output. To improve the quality of the enhanced output and mitigate oversuppression, we experiment with re-weighting frames by the presence or absence of speech activity and with applying augmentations to the speaker embeddings. Training under a multi-task learning setting, we empirically show that the proposed unified model obtains promising results on both personalized and non-personalized speech enhancement benchmarks and performs comparably to models trained specifically for either task. This strong performance demonstrates that the unified model is a more economical alternative to maintaining separate task-specific models at inference time.
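
The abstract's core ideas (a per-frame conditioning input that selects the output type, frame re-weighting by speech activity, and multi-task training over both modes) can be sketched as follows. This is a minimal illustration assuming a mask-based GRU enhancer with an MSE objective; the class `UnifiedEnhancer`, its dimensions, the VAD weights, and the loss function are hypothetical stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn


class UnifiedEnhancer(nn.Module):
    """Mask-based enhancer conditioned per frame on the task type
    (0 = non-personalized, 1 = personalized). Hypothetical sketch."""

    def __init__(self, n_feats=257, emb_dim=128, hidden=256):
        super().__init__()
        # Per-frame input: noisy features + speaker embedding + 1-dim flag.
        self.rnn = nn.GRU(n_feats + emb_dim + 1, hidden, batch_first=True)
        self.mask = nn.Sequential(nn.Linear(hidden, n_feats), nn.Sigmoid())

    def forward(self, noisy, spk_emb, cond):
        # noisy: (B, T, F); spk_emb: (B, emb_dim); cond: (B, T, 1)
        emb = spk_emb.unsqueeze(1).expand(-1, noisy.size(1), -1)
        h, _ = self.rnn(torch.cat([noisy, emb, cond], dim=-1))
        return self.mask(h) * noisy  # masked (enhanced) magnitude estimate


def vad_weighted_loss(est, target, vad, active_w=1.0, inactive_w=0.2):
    # Re-weight per-frame MSE by speech activity; down-weighting inactive
    # frames is one plausible way to penalize oversuppression of target
    # speech more than residual noise. The weights here are assumptions.
    per_frame = ((est - target) ** 2).mean(dim=-1)    # (B, T)
    w = vad * active_w + (1.0 - vad) * inactive_w     # (B, T)
    return (w * per_frame).sum() / w.sum()


# Multi-task training step: draw the task per utterance and zero out the
# speaker embedding in non-personalized mode, so one set of weights is
# trained to serve both kinds of output.
model = UnifiedEnhancer()
noisy, target = torch.rand(4, 100, 257), torch.rand(4, 100, 257)
spk_emb = torch.randn(4, 128)
vad = (torch.rand(4, 100) > 0.4).float()  # stand-in voice-activity labels

personalized = (torch.rand(4) < 0.5).float()          # per-utterance task draw
cond = personalized.view(4, 1, 1).expand(-1, 100, 1)  # frame-wise conditioning
spk_emb = spk_emb * personalized.unsqueeze(-1)

loss = vad_weighted_loss(model(noisy, spk_emb, cond), target, vad)
loss.backward()
```

Because the conditioning input is frame-wise, a deployment of this kind could in principle switch between personalized and non-personalized output mid-stream without swapping models, which is the economy the abstract argues for.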

Keywords

  • Speech enhancement
  • multi-task learning
  • real-time communication
  • speaker identification
  • voice activity detection

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
