We consider the problem of synthesizing the sound at any desired position and time from the recordings of a set of microphones. Analogous to image-based rendering in vision, we propose a sound-based synthesis approach, in which audio signals at new positions are interpolated directly from the signals recorded by nearby microphones. The key underlying problems for sound-based synthesis are the sampling and reconstruction of the sound field. We provide a spectral analysis of the sound field under the far-field assumption and, based on this analysis, derive the minimum sampling and optimal reconstruction for several common settings.
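The abstract's core idea can be illustrated with a minimal sketch (not the paper's actual derivation): under the far-field assumption, a plane wave of temporal frequency f arriving at angle theta has spatial frequency f·sin(theta)/c along a line of microphones, bounded by f/c. Sampling the line at a spacing no larger than c/(2f) therefore satisfies spatial Nyquist, and the signal at an intermediate position can be interpolated from the microphone samples, here via standard sinc (Shannon) reconstruction. All numerical values below are hypothetical test parameters.

```python
import numpy as np

c = 343.0          # speed of sound (m/s)
f = 1000.0         # plane-wave temporal frequency (Hz), hypothetical
theta = np.pi / 6  # arrival angle, hypothetical
t = 0.0            # evaluate the field at a fixed instant

# Spatial bandwidth along the array is f*sin(theta)/c <= f/c,
# so Nyquist spacing is d <= c/(2f); sample twice as densely here
# to reduce truncation error from the finite array.
d = c / (2 * f) * 0.5
n = np.arange(-200, 201)
x_mics = n * d  # uniform linear microphone positions

def field(x):
    # Far-field plane wave observed at position x at time t.
    return np.cos(2 * np.pi * f * (t - x * np.sin(theta) / c))

samples = field(x_mics)  # "recordings" at the microphones

def reconstruct(x):
    # Sinc (Shannon) interpolation of the sampled sound field
    # at an arbitrary position x between microphones.
    return float(np.sum(samples * np.sinc((x - x_mics) / d)))

x_new = 0.123          # a position between two microphones
approx = reconstruct(x_new)
exact = field(x_new)   # ground truth from the plane-wave model
```

With the array oversampled, the interpolated value matches the true field up to the truncation error of the finite sinc sum; at a microphone position the reconstruction is exact, since sinc vanishes at all other sample points.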
Original language: English (US)
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
State: Published - 2004
ASJC Scopus subject areas
- Signal Processing
- Electrical and Electronic Engineering