VACE multimodal meeting corpus

Lei Chen, R. Travis Rose, Ying Qiao, Irene Kimbara, Fey Parrill, Haleema Welji, Tony Xu Han, Jilin Tu, Zhongqiang Huang, Mary Harper, Francis Quek, Yingen Xiong, David McNeill, Ronald Tuttle, Thomas Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we report on the infrastructure we have developed to support our research on multimodal cues for understanding meetings. With our focus on multimodality, we investigate the interaction among speech, gesture, posture, and gaze in meetings. For this purpose, a high-quality multimodal corpus is being produced.

Original language: English (US)
Title of host publication: Machine Learning for Multimodal Interaction - Second International Workshop, MLMI 2005, Revised Selected Papers
Publisher: Springer
Pages: 40-51
Number of pages: 12
ISBN (Print): 3540325492, 9783540325499
DOIs
State: Published - 2006
Event: 2nd International Workshop on Machine Learning for Multimodal Interaction, MLMI 2005 - Edinburgh, United Kingdom
Duration: Jul 11 2005 - Jul 13 2005

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3869 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)
