Zero-shot test time adaptation via knowledge distillation for personalized speech denoising and dereverberation

Sunwoo Kim, Mrudula Athi, Guangji Shi, Minje Kim, Trausti Kristjansson

Research output: Contribution to journal › Article › peer-review


A personalization framework is proposed to adapt compact models to test-time environments and improve their speech enhancement (SE) performance in noisy and reverberant conditions. The target use cases are those in which the end-user device encounters only one or a few speakers and noise types that tend to recur in a specific acoustic environment. Hence, it is postulated that a small personalized model is sufficient to handle this focused subset of the original universal SE problem. The study addresses a major data shortage issue: although the goal is to learn from a specific user's speech signals and the test-time environment, the target clean speech is unavailable for model training due to privacy concerns and the technical difficulty of recording noise- and reverberation-free voice signals. The proposed zero-shot personalization method uses no clean speech target. Instead, it employs the knowledge distillation framework, in which the more advanced denoising results of an oversized teacher model serve as pseudo targets for training a small student model. Evaluation on various test-time conditions suggests that the proposed personalization approach can significantly enhance the compact student model's test-time performance. Personalized models outperform larger non-personalized baseline models, demonstrating that personalization achieves model compression with no loss in dereverberation and denoising performance.
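The core training loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the model architectures, hyperparameters, and the `personalize` helper are all hypothetical, and the abstract does not specify the networks or loss used. The key idea shown is that a frozen, oversized teacher enhances the user's noisy recordings, and its outputs serve as pseudo-clean targets for a compact student, so no ground-truth clean speech is ever required.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Hypothetical mask-based denoiser on magnitude-spectrogram frames.
    The `hidden` width controls capacity: large for the teacher, small
    for the compact student."""
    def __init__(self, n_bins=257, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bins), nn.Sigmoid(),  # mask in [0, 1]
        )

    def forward(self, x):
        # Apply the estimated mask to the noisy magnitudes.
        return x * self.net(x)

def personalize(student, teacher, noisy_batches, epochs=3, lr=1e-3):
    """Zero-shot personalization: train the student against the frozen
    teacher's pseudo targets, using only noisy test-time recordings."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    teacher.eval()
    for _ in range(epochs):
        for noisy in noisy_batches:
            with torch.no_grad():
                pseudo_target = teacher(noisy)  # teacher's enhanced output
            loss = loss_fn(student(noisy), pseudo_target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student

# Illustrative usage with random data standing in for the user's recordings.
teacher = TinyDenoiser(hidden=512)   # oversized teacher (assumed pretrained)
student = TinyDenoiser(hidden=64)    # compact student to be personalized
batches = [torch.rand(8, 100, 257) for _ in range(4)]  # (batch, frames, bins)
personalize(student, teacher, batches)
```

After personalization, the small student alone is deployed on the device; the teacher is needed only during the adaptation phase, which is how distillation acts as model compression here.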

Original language: English (US)
Pages (from-to): 1353-1367
Number of pages: 15
Journal: Journal of the Acoustical Society of America
Issue number: 2
State: Published - Feb 1 2024
Externally published: Yes

ASJC Scopus subject areas

  • Arts and Humanities (miscellaneous)
  • Acoustics and Ultrasonics


