Abstract
Nowadays, multiple video cameras are employed for the live broadcast and recording of almost all major social events, and these camera streams must be aggregated and rendered into a single video program for audiences. While this content-composition process aims to present the most interesting perspective of an event, it raises the problem of how to fully customize the final composed video program to different audience interests without requiring too much input from the audience. The goal of this work is to solve this problem by proposing the Automatic Video Production with User Customization (AVPUC) system, which separates video stream interestingness comparison from video program rendering to provide room for maximal customization. Human-controlled video selection and automatic video evaluation are combined to support video content customization while reducing redundant audience input. Preliminary evaluation results confirm that AVPUC's capture-evaluation-render model for video production improves audience satisfaction with customized, multi-perspective viewing of social events.
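The abstract does not give implementation details, but the capture-evaluation-render separation it describes can be illustrated with a minimal sketch: automatic per-stream interestingness scores are computed once, and each viewer's preferences are applied only at the rendering/selection step. All class and function names below (`Frame`, `evaluate_streams`, `select_camera`, the preference weights) are hypothetical illustrations, not the paper's actual design.

```python
# Minimal sketch (assumed, not from the paper) of a capture-evaluation-render pipeline
# in the spirit of AVPUC: evaluation is shared across viewers, customization happens
# at selection time using per-viewer preference weights.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Frame:
    camera_id: str
    timestamp: float
    interestingness: float  # automatic evaluation score in [0, 1]


def evaluate_streams(frames: List[Frame]) -> Dict[str, float]:
    """Automatic evaluation: keep the latest interestingness score per camera."""
    scores: Dict[str, float] = {}
    for f in frames:
        scores[f.camera_id] = f.interestingness
    return scores


def select_camera(scores: Dict[str, float], preferences: Dict[str, float]) -> str:
    """Customization step: weight automatic scores by viewer preferences,
    so each audience member can receive a differently composed program."""
    return max(scores, key=lambda cam: scores[cam] * preferences.get(cam, 1.0))


# Example: a viewer who favours the stage-left camera gets it selected even though
# the wide shot scored slightly higher automatically.
frames = [
    Frame("wide", 0.0, 0.70),
    Frame("stage_left", 0.0, 0.65),
    Frame("audience", 0.0, 0.40),
]
prefs = {"stage_left": 1.2}  # viewer-supplied bias; other cameras default to 1.0
print(select_camera(evaluate_streams(frames), prefs))  # -> "stage_left"
```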
| Original language | English (US) |
| --- | --- |
| Article number | 21 |
| Pages (from-to) | 203-215 |
| Number of pages | 13 |
| Journal | Proceedings of SPIE - The International Society for Optical Engineering |
| Volume | 5680 |
| DOIs | |
| State | Published - 2005 |
| Event | Proceedings of SPIE-IS&T Electronic Imaging - Multimedia Computing and Networking 2005, San Jose, CA, United States; Duration: Jan 19 2005 → Jan 20 2005 |
ASJC Scopus subject areas
- Electronic, Optical and Magnetic Materials
- Condensed Matter Physics
- Computer Science Applications
- Applied Mathematics
- Electrical and Electronic Engineering