Toward sound-based synthesis: The far-field case

Research output: Contribution to journal › Article › peer-review

Abstract

We consider the problem of synthesizing the sound at any desired position and time from the recordings of a set of microphones. Analogous to the image-based rendering approach in vision, we propose a sound-based synthesis approach in which audio signals at new positions are interpolated directly from the recorded signals of nearby microphones. The key underlying problems for sound-based synthesis are the sampling and reconstruction of the sound field. We provide a spectral analysis of the sound field under the far-field assumption. Based on this analysis, we derive the minimum sampling requirements and optimal reconstruction for several common settings.
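To make the sampling-and-reconstruction idea concrete, the following is a minimal illustrative sketch, not the paper's derivation: under the far-field assumption a plane wave observed along a line of microphones is spatially band-limited, so a spacing of d ≤ c / (2·f_max) allows the signal at an arbitrary position to be recovered by sinc interpolation across the microphone recordings. All numeric values (f_max, arrival angle, array size) are assumptions chosen for the example.

```python
import numpy as np

# Illustrative sketch of far-field sound-field sampling and reconstruction.
# A single plane wave hits a uniform linear microphone array; the signal at a
# new, unrecorded position is interpolated from the nearby microphones.

c = 343.0            # speed of sound in air (m/s)
f_max = 1000.0       # assumed maximum temporal frequency of the source (Hz)
fs = 8000.0          # temporal sampling rate (Hz)
d = c / (2 * f_max)  # spatial Nyquist spacing implied by the far-field model

num_mics = 64
mic_x = (np.arange(num_mics) - num_mics / 2) * d  # microphone positions (m)
t = np.arange(0, 0.05, 1 / fs)                    # time axis (s)

theta = np.deg2rad(40.0)  # assumed plane-wave arrival angle (far-field source)

def source(t):
    """Band-limited test signal (tones below f_max)."""
    return np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 900 * t)

# Each microphone records the plane wave with an angle-dependent delay.
recordings = np.array([source(t - x * np.sin(theta) / c) for x in mic_x])

# Reconstruct the signal at a new listening position x_new (not on a mic)
# by spatial sinc interpolation of the recorded samples at each time instant.
x_new = 3.37 * d
weights = np.sinc((x_new - mic_x) / d)  # ideal low-pass interpolation kernel
estimate = weights @ recordings         # weighted sum over microphone signals

# Compare against the true plane wave at x_new; residual error comes from the
# finite array truncating the ideal (infinite) sinc interpolator.
reference = source(t - x_new * np.sin(theta) / c)
print(f"max reconstruction error: {np.max(np.abs(estimate - reference)):.3e}")
```

In this toy setup the interpolation weights depend only on the microphone geometry, not on the source, which mirrors the point of the abstract: once the sound field is sampled densely enough, new listening positions can be synthesized directly from the recordings.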

Original language: English (US)
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2
State: Published - 2004

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
