A multimodal-sensor-enabled room for unobtrusive group meeting analysis

Indrani Bhattacharya, Tongtao Zhang, Heng Ji, Michael Foley, Christine Ku, Christoph Riedl, Richard J. Radke, Ni Zhang, Cameron Mine, Brooke Foucault Welles

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Group meetings can suffer from serious problems that undermine performance, including bias, “groupthink”, fear of speaking, and unfocused discussion. To better understand these issues, propose interventions, and thus improve team performance, we need to study human dynamics in group meetings. However, this process currently depends heavily on manual coding and video cameras. Manual coding is tedious, inaccurate, and subjective, while active video cameras can affect the natural behavior of meeting participants. Here, we present a smart meeting room that combines microphones and unobtrusive ceiling-mounted Time-of-Flight (ToF) sensors to understand group dynamics in team meetings. We automatically process the multimodal sensor outputs with signal, image, and natural language processing algorithms to estimate participant head pose, visual focus of attention (VFOA), non-verbal speech patterns, and discussion content. We derive metrics from these automatic estimates and correlate them with user-reported rankings of emergent group leaders and major contributors to produce accurate predictors. We validate our algorithms and report results on a new dataset of lunar survival tasks of 36 individuals across 10 groups collected in the multimodal-sensor-enabled smart room.

Original language: English (US)
Title of host publication: ICMI 2018 - Proceedings of the 2018 International Conference on Multimodal Interaction
Publisher: Association for Computing Machinery, Inc
Pages: 347-355
Number of pages: 9
ISBN (Electronic): 9781450356923
DOIs: https://doi.org/10.1145/3242969.3243022
State: Published - Oct 2 2018
Externally published: Yes
Event: 20th ACM International Conference on Multimodal Interaction, ICMI 2018 - Boulder, United States
Duration: Oct 16 2018 – Oct 20 2018

Publication series

Name: ICMI 2018 - Proceedings of the 2018 International Conference on Multimodal Interaction

Conference

Conference: 20th ACM International Conference on Multimodal Interaction, ICMI 2018
Country: United States
City: Boulder
Period: 10/16/18 – 10/20/18

Keywords

  • Group meeting analysis
  • Head pose estimation
  • Meeting summarization
  • Multimodal sensing
  • Natural language processing
  • Smart rooms
  • Time-of-flight sensing

ASJC Scopus subject areas

  • Computer Science Applications
  • Computer Vision and Pattern Recognition
  • Hardware and Architecture
  • Human-Computer Interaction


Cite this

    Bhattacharya, I., Zhang, T., Ji, H., Foley, M., Ku, C., Riedl, C., Radke, R. J., Zhang, N., Mine, C., & Welles, B. F. (2018). A multimodal-sensor-enabled room for unobtrusive group meeting analysis. In ICMI 2018 - Proceedings of the 2018 International Conference on Multimodal Interaction (pp. 347-355). (ICMI 2018 - Proceedings of the 2018 International Conference on Multimodal Interaction). Association for Computing Machinery, Inc. https://doi.org/10.1145/3242969.3243022