In the Eye of the Beholder: Gaze and Actions in First Person Video

Yin Li, Miao Liu, James Rehg

Research output: Contribution to journal › Article › peer-review

Abstract

We address the task of jointly determining what a person is doing and where they are looking based on the analysis of video captured by a head-worn camera. To facilitate our research, we first introduce the EGTEA Gaze+ dataset. Our dataset comes with videos, gaze tracking data, hand masks, and action annotations, thereby providing the most comprehensive benchmark for First Person Vision (FPV). Moving beyond the dataset, we propose a novel deep model for joint gaze estimation and action recognition in FPV. Our method describes the participant's gaze as a probabilistic variable and models its distribution using stochastic units in a deep network. We further sample from these stochastic units to generate an attention map that guides the aggregation of visual features for action recognition. Our method is evaluated on our EGTEA Gaze+ dataset and exceeds the state of the art by a significant margin. More importantly, we demonstrate that our model can be applied to the larger-scale FPV dataset EPIC-Kitchens, even without using gaze, offering new state-of-the-art results on FPV action recognition.
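To make the attention mechanism described in the abstract concrete, the following is a minimal sketch, assuming PyTorch: a 1x1 convolution predicts per-location gaze logits, a soft sample is drawn from them with the Gumbel-softmax trick (standing in for the paper's stochastic units), and the sampled map weights the spatial aggregation of features before action classification. The layer shapes, the sampler, and the class count are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticGazeAttention(nn.Module):
    """Sketch of gaze-guided feature aggregation (illustrative, not the
    paper's exact model): predict a spatial gaze distribution, sample an
    attention map from it, and pool features with that map."""

    def __init__(self, in_channels=1024, num_actions=106):
        super().__init__()
        # 1x1 conv produces per-location logits for the gaze distribution
        self.gaze_head = nn.Conv2d(in_channels, 1, kernel_size=1)
        self.classifier = nn.Linear(in_channels, num_actions)

    def forward(self, feats, tau=1.0):
        # feats: (B, C, H, W) visual features from a backbone network
        b, c, h, w = feats.shape
        logits = self.gaze_head(feats).view(b, h * w)   # (B, H*W)

        if self.training:
            # Stochastic unit: draw a soft sample via the Gumbel-softmax trick
            attn = F.gumbel_softmax(logits, tau=tau, hard=False)
        else:
            # Deterministic at test time: use the expected gaze distribution
            attn = F.softmax(logits, dim=1)

        attn = attn.view(b, 1, h, w)
        # Attention-weighted aggregation of features over the spatial grid
        pooled = (feats * attn).sum(dim=(2, 3))          # (B, C)
        action_logits = self.classifier(pooled)
        # Also return gaze logits so a gaze-supervision loss can be attached
        return action_logits, logits.view(b, h, w)
```

In this sketch the gaze logits can be trained against recorded gaze fixations when they are available (as in EGTEA Gaze+), while the same attention path still works without gaze supervision, mirroring the abstract's claim about EPIC-Kitchens.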

Original language: English (US)
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
State: Accepted/In press - 2021
Externally published: Yes

Keywords

  • Action recognition
  • Benchmark testing
  • Cameras
  • Convolution
  • deep probabilistic models
  • first person vision
  • gaze estimation
  • Gaze tracking
  • Stochastic processes
  • Three-dimensional displays
  • video analysis
  • Visualization

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics
