Abstract
In this paper, we present a computational scene model and derive novel algorithms for computing audio and visual scenes and within-scene structures in films. The computational model incorporates constraints derived from film-making rules and from experimental results in the psychology of audition. Central to the model is the notion of a causal, finite-memory viewer. We segment the audio and video data separately; in each case, we determine the degree of correlation of the most recent data in the memory with the past. The audio and video scene boundaries are then located at local maxima and minima, respectively. We derive four types of computable scenes that arise from different kinds of audio and video scene boundary synchronization. We show how to exploit the local topology of an image sequence, in conjunction with statistical tests, to detect dialogues, and we also derive a simple algorithm to detect silences in the audio. An important feature of our work is the introduction of semantic constraints based on structure and silence into the computational model; this yields computable scenes that are more consistent with human observations. The algorithms were tested on a difficult data set: the first hour of each of three commercial films. Best results: 94% for computable scene detection, and 91% recall at 100% precision for dialogue detection.
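The abstract mentions a simple silence-detection algorithm but does not spell it out. A common way to realize such a detector is short-time energy thresholding; the sketch below is a hypothetical illustration of that idea, not the paper's actual method, and all parameter names and default values are assumptions.

```python
import numpy as np

def detect_silences(samples, sr, frame_ms=20, energy_thresh=1e-4, min_silence_ms=200):
    """Return (start_frame, end_frame) pairs of sufficiently long silent runs.

    Illustrative sketch only: flags a frame as silent when its mean-square
    energy falls below `energy_thresh`, then keeps runs of silent frames
    at least `min_silence_ms` long.
    """
    frame_len = max(1, int(sr * frame_ms / 1000))
    n_frames = len(samples) // frame_len
    runs = []
    run_start = None
    for i in range(n_frames):
        frame = samples[i * frame_len:(i + 1) * frame_len]
        energy = float(np.mean(frame ** 2))  # short-time mean-square energy
        if energy < energy_thresh:
            if run_start is None:
                run_start = i       # a silent run begins here
        elif run_start is not None:
            runs.append((run_start, i))
            run_start = None
    if run_start is not None:       # close a run that reaches the end
        runs.append((run_start, n_frames))
    min_frames = max(1, int(min_silence_ms / frame_ms))
    return [(s, e) for (s, e) in runs if e - s >= min_frames]
```

For example, half a second of digital silence between two tones would be reported as one run of silent frames, while brief inter-word pauses shorter than `min_silence_ms` would be discarded.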
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 482-491 |
| Number of pages | 10 |
| Journal | IEEE Transactions on Multimedia |
| Volume | 4 |
| Issue number | 4 |
| DOIs | |
| State | Published - Dec 2002 |
| Externally published | Yes |
Keywords
- Computable scenes
- Film-making production rules
- Joint audio-visual segmentation
- Structure discovery
ASJC Scopus subject areas
- Signal Processing
- Media Technology
- Computer Science Applications
- Electrical and Electronic Engineering