A multimodality framework for creating speaker/non-speaker profile databases for real-world video

Jehanzeb Abbas, Charlie K. Dagli, Thomas S. Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We propose a complete solution to full-modality person profiling for speakers and sub-modality person profiling for non-speakers in real-world videos. This is a step towards building an elaborate database of face, name, and voice correspondence for speakers appearing in news videos. In addition, we are interested in a name-and-face correspondence database for non-speakers who appear during voice-overs. We use an unsupervised technique to create the speaker identification database, and a unique primary feature matching and parallel line matching algorithm to create the non-speaker identification database. We tested our approach on real-world data, and the results show good performance on news videos. The framework can be incorporated into a larger multimedia news video analysis system or a multimedia search system for efficient news video retrieval and browsing.
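To make the face/name/voice correspondence described above concrete, the following is a minimal, hypothetical sketch of what a profile record and a simple face-based lookup might look like. The `Profile` structure, the cosine-similarity measure, and the threshold are illustrative assumptions for exposition only; they are not taken from the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Profile:
    """One database entry: speaker profiles carry face, name, and voice
    features; non-speaker profiles (faces seen during voice-overs) carry
    only face and name."""
    name: str
    face_features: list
    voice_features: Optional[list] = None  # None marks a non-speaker entry

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two feature vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def match_face(query: list, database: list, threshold: float = 0.8) -> Optional[Profile]:
    """Return the best-matching profile for a detected face, or None if
    no profile clears the similarity threshold."""
    best, best_score = None, threshold
    for profile in database:
        score = cosine_similarity(query, profile.face_features)
        if score > best_score:
            best, best_score = profile, score
    return best

# Toy usage: one speaker entry and one non-speaker (voice-over) entry.
db = [
    Profile("Anchor A", face_features=[0.9, 0.1, 0.3], voice_features=[0.5, 0.4]),
    Profile("Bystander B", face_features=[0.2, 0.8, 0.6]),
]
hit = match_face([0.88, 0.12, 0.28], db)
print(hit.name if hit else "unknown")
```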

Original language: English (US)
Title of host publication: 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07
DOIs
State: Published - 2007
Event: 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07 - Minneapolis, MN, United States
Duration: Jun 17, 2007 - Jun 22, 2007

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919


ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
