Modeling actions through state changes

Alireza Fathi, James M. Rehg

Research output: Contribution to journal › Conference article › peer-review

Abstract

In this paper, we present a model of action based on the change in the state of the environment. Many actions involve similar dynamics and hand-object relationships, but differ in their purpose and meaning. The key to differentiating these actions is the ability to identify how they change the state of objects and materials in the environment. We propose a weakly supervised method for learning the object and material states that are necessary for recognizing daily actions. Once these state detectors are learned, we apply them to input videos and pool their outputs to detect actions. We further demonstrate that our method can be used to segment discrete actions from a continuous video of an activity. Our method outperforms the state of the art on both action recognition and activity segmentation.
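
As a concrete illustration of the pipeline the abstract describes (per-frame state detectors whose pooled outputs feed an action classifier), the sketch below shows one plausible form of the pooling step. It is a minimal sketch, not the authors' implementation: the detector callables, the max-pooling choice, and the pooling margin are all assumptions made for illustration.

    # Minimal sketch of pooling per-frame state-detector outputs into an
    # action feature. Hypothetical interfaces: each detector is a callable
    # mapping a frame to a confidence score; none of this is the paper's code.

    import numpy as np

    def state_scores(frames, detectors):
        """Apply each state detector to each frame.

        Returns an array of shape (num_frames, num_detectors) where entry
        (t, k) is detector k's confidence that its state holds in frame t.
        """
        return np.array([[d(frame) for d in detectors] for frame in frames])

    def action_feature(frames, detectors, margin=0.2):
        """Pool detector outputs at the start and end of a clip.

        The feature concatenates the pooled 'before' scores, the pooled
        'after' scores, and their difference, so a downstream classifier
        can key on how the action changed object and material states.
        """
        scores = state_scores(frames, detectors)
        n = len(frames)
        k = max(1, int(margin * n))        # frames pooled at each end (assumed)
        before = scores[:k].max(axis=0)    # max-pool over opening frames
        after = scores[-k:].max(axis=0)    # max-pool over closing frames
        return np.concatenate([before, after, after - before])

The resulting feature could be fed to any standard classifier (e.g., a linear SVM), and sliding the same pooling window along a longer video yields per-window scores that could be thresholded to segment discrete actions, in the spirit of the segmentation result the abstract mentions.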

Original language: English (US)
Article number: 6619177
Pages (from-to): 2579-2586
Number of pages: 8
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
State: Published - 2013
Externally published: Yes
Event: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2013 - Portland, OR, United States
Duration: Jun 23, 2013 - Jun 28, 2013

Keywords

  • Action Recognition
  • Egocentric
  • Object
  • Semi-Supervised Learning
  • State

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
