Abstract
In this paper, we present a method for jointly learning a microphone selection mechanism and a speech enhancement network for multi-channel speech enhancement with an ad-hoc microphone array. The attention-based microphone selection mechanism is trained to reduce communication costs through a penalty term that represents a task-performance/communication-cost trade-off. While operating within this trade-off, our method can intelligently stream from more microphones in lower-SNR scenes and fewer microphones in higher-SNR scenes. We evaluate the model in complex echoic acoustic scenes with moving sources and show that it matches the performance of models that stream from a fixed number of microphones while reducing communication costs.
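To illustrate the kind of objective described above, the sketch below shows one way a communication-cost penalty could be combined with an enhancement loss. The names (`joint_loss`, `selection_weights`, `lam`), the L1 waveform term, and the sum-of-attention-weights cost are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def joint_loss(enhanced, target, selection_weights, lam=0.01):
    """Hypothetical joint objective: enhancement quality plus a
    communication-cost penalty on the microphone-selection weights.

    enhanced, target:   (batch, samples) waveforms
    selection_weights:  (batch, num_mics) attention weights in [0, 1];
                        larger values mean a channel is streamed
    lam:                task-performance/communication-cost trade-off
    """
    # Enhancement term: simple L1 waveform loss (stand-in for the
    # paper's enhancement objective).
    enh = torch.mean(torch.abs(enhanced - target))

    # Communication-cost term: penalize the expected number of
    # microphones that are actively streamed.
    cost = torch.mean(torch.sum(selection_weights, dim=-1))

    return enh + lam * cost
```

Increasing `lam` in a sketch like this would push the selection mechanism toward streaming fewer channels, at some cost to enhancement quality.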
Original language | English (US) |
---|---|
Pages (from-to) | 8438-8442 |
Number of pages | 5 |
Journal | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
Volume | 2021-June |
DOIs | |
State | Published - 2021 |
Event | 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Virtual, Toronto, Canada |
Duration | Jun 6 2021 → Jun 11 2021 |
Keywords
- Ad-hoc microphone array
- Beamforming
- Deep learning
- Sensor selection
- Speech enhancement
ASJC Scopus subject areas
- Software
- Signal Processing
- Electrical and Electronic Engineering