Vision-based speaker detection using Bayesian networks

James M. Rehg, Kevin P. Murphy, Paul W. Fieguth

Research output: Contribution to journal › Conference article › peer-review

Abstract

The development of user interfaces based on vision and speech requires the solution of a challenging statistical inference problem: The intentions and actions of multiple individuals must be inferred from noisy and ambiguous data. We argue that Bayesian network models are an attractive statistical framework for cue fusion in these applications. Bayes nets combine a natural mechanism for expressing contextual information with efficient algorithms for learning and inference. We illustrate these points through the development of a Bayes net model for detecting when a user is speaking. The model combines four simple vision sensors: face detection, skin color, skin texture, and mouth motion. We present some promising experimental results.
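As a concrete illustration of the cue-fusion idea described above, the sketch below implements a minimal Bayes net in Python: a single hidden "speaking" node with the four cues (face detection, skin color, skin texture, mouth motion) modeled as conditionally independent observations. The network structure and all probability values are illustrative assumptions, not the model or parameters reported in the paper.

```python
# Minimal sketch of Bayes-net cue fusion for speaker detection.
# One hidden "speaking" node with four observed cue nodes; all
# probabilities below are assumed for illustration only.

# Prior over the hidden state: is the user speaking?
PRIOR_SPEAKING = 0.3

# Assumed likelihoods: P(cue = True | speaking), P(cue = True | not speaking).
LIKELIHOODS = {
    "face_detected": (0.95, 0.60),
    "skin_color":    (0.90, 0.55),
    "skin_texture":  (0.85, 0.50),
    "mouth_motion":  (0.80, 0.10),
}


def posterior_speaking(observations):
    """Exact posterior P(speaking | observed cues) for this small network."""
    p_speak = PRIOR_SPEAKING
    p_silent = 1.0 - PRIOR_SPEAKING
    for cue, observed in observations.items():
        p_true_speak, p_true_silent = LIKELIHOODS[cue]
        p_speak *= p_true_speak if observed else (1.0 - p_true_speak)
        p_silent *= p_true_silent if observed else (1.0 - p_true_silent)
    return p_speak / (p_speak + p_silent)


if __name__ == "__main__":
    # Example frame: face found, skin cues fire, and the mouth is moving.
    obs = {"face_detected": True, "skin_color": True,
           "skin_texture": True, "mouth_motion": True}
    print(f"P(speaking | cues) = {posterior_speaking(obs):.3f}")
```

With all four cues firing, the assumed likelihoods push the posterior well above the prior; dropping the mouth-motion cue (the most discriminative one here) pulls it back down, which is the qualitative behavior cue fusion is meant to capture.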

Original language: English (US)
Pages (from-to): 110-116
Number of pages: 7
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Volume: 2
State: Published - 1999
Externally published: Yes
Event: Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'99) - Fort Collins, CO, USA
Duration: Jun 23 1999 - Jun 25 1999

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
