Abstract
We consider the problem of synthesizing the sound at any desired position and time from the recordings of a set of microphones. Analogous to the image-based rendering approach in vision, we propose a sound-based synthesis approach for audio. In this approach, audio signals at new positions are interpolated directly from the recorded signals of nearby microphones. The key underlying problems for sound-based synthesis are the sampling and reconstruction of the sound field. We provide a spectral analysis of the sound field under the far-field assumption. Based on this analysis, we derive the minimum sampling and optimal reconstruction for several common settings.
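To make the far-field idea concrete, here is a minimal sketch in Python with NumPy, not the reconstruction derived in the paper: for a single plane wave of known propagation direction, the signal at a new listening position is just a time-shifted copy of a nearby microphone's recording, since under the far-field assumption the field is s(t - ⟨k, x⟩/c). The function names (`render_at`, `fractional_delay`) and the single-plane-wave setup are illustrative assumptions; the paper's approach interpolates from multiple microphones with optimal reconstruction filters.

```python
import numpy as np

C = 343.0  # approximate speed of sound in air, m/s

def fractional_delay(signal, delay_samples):
    """Delay `signal` by a possibly non-integer number of samples,
    using linear interpolation between neighbouring samples."""
    n = np.arange(len(signal), dtype=float) - delay_samples
    i = np.floor(n).astype(int)
    frac = n - i
    out = np.zeros(len(signal))
    valid = (i >= 0) & (i + 1 < len(signal))  # ignore samples shifted out of range
    out[valid] = (1.0 - frac[valid]) * signal[i[valid]] + frac[valid] * signal[i[valid] + 1]
    return out

def render_at(mic_signal, mic_pos, listen_pos, wave_dir, fs):
    """Far-field sketch: a plane wave s(t - <k, x>/c) recorded at mic_pos
    is re-synthesized at listen_pos by a pure time shift of
    k . (listen_pos - mic_pos) / c seconds."""
    k = np.asarray(wave_dir, dtype=float)
    k /= np.linalg.norm(k)  # unit propagation direction
    extra_path = np.dot(k, np.asarray(listen_pos) - np.asarray(mic_pos))  # metres
    return fractional_delay(np.asarray(mic_signal, dtype=float), extra_path / C * fs)

# Usage: a 1 kHz tone recorded at the origin, rendered 0.5 m down-wave
# (arrives roughly 0.5 / 343 s, i.e. about 23 samples, later at 16 kHz).
fs = 16000.0
t = np.arange(1024) / fs
tone = np.sin(2 * np.pi * 1000.0 * t)
shifted = render_at(tone, mic_pos=[0.0, 0.0], listen_pos=[0.5, 0.0],
                    wave_dir=[1.0, 0.0], fs=fs)
```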
Original language | English (US)
---|---
Pages (from-to) | II601-II604
Journal | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume | 2
State | Published - 2004
Event | IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, QC, Canada; May 17-21, 2004
ASJC Scopus subject areas
- Software
- Signal Processing
- Electrical and Electronic Engineering