A unified model for activity recognition from video sequences

Esther Resendiz, Narendra Ahuja

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We propose an activity recognition algorithm that uses a unified spatial-frequency model of motion to recognize large-scale differences in action from global statistics, and then distinguishes between motions with similar global statistics by spatially localizing the moving objects. We model the Fourier transforms of translating rigid objects in a video, since the Fourier domain inherently groups regions of the video with similar motion into high-energy concentrations, making global motion detectable. Frequency-domain statistics can isolate the frames that both adhere to our model and contain similar global motion, so activities can be separated into broader classes based on their global motion. A least-squares problem is then solved to isolate the spatially discriminative object configurations that produce similar global motion statistics. This model provides a unified framework for forming the concise, globally optimal spatial and motion descriptors needed to discriminate between activities. Experimental results are demonstrated on a human activity dataset.
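The frequency-domain grouping described in the abstract rests on the Fourier-shift property: translating a rigid region in space multiplies its Fourier transform by a phase ramp, so coherent global motion is easy to detect in the frequency domain. The sketch below is only an illustration of that property, not the authors' algorithm; it estimates global translation between consecutive frames via standard phase correlation, and the synthetic video, function names, and per-frame motion-magnitude statistic are assumptions introduced here for demonstration.

# Illustrative sketch (not the paper's method): global translation between
# frames via phase correlation, which exploits the Fourier-shift theorem.
import numpy as np

def global_translation(frame_a: np.ndarray, frame_b: np.ndarray):
    """Estimate the dominant (dy, dx) shift taking frame_a to frame_b."""
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross_power = np.conj(Fa) * Fb
    cross_power /= np.abs(cross_power) + 1e-12      # keep only the phase
    correlation = np.fft.ifft2(cross_power).real    # peak marks the shift
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap peak coordinates into signed displacements.
    dy = peak[0] if peak[0] <= frame_a.shape[0] // 2 else peak[0] - frame_a.shape[0]
    dx = peak[1] if peak[1] <= frame_a.shape[1] // 2 else peak[1] - frame_a.shape[1]
    return dy, dx

def global_motion_statistics(video: np.ndarray) -> np.ndarray:
    """Per-frame global motion magnitudes for a (T, H, W) grayscale video,
    a crude stand-in for frequency-domain global-motion statistics."""
    shifts = [global_translation(video[t], video[t + 1]) for t in range(len(video) - 1)]
    return np.linalg.norm(np.array(shifts, dtype=float), axis=1)

if __name__ == "__main__":
    # Synthetic example: a bright square translating 2 pixels per frame.
    video = np.zeros((5, 64, 64))
    for t in range(5):
        video[t, 20:30, 10 + 2 * t:20 + 2 * t] = 1.0
    print(global_motion_statistics(video))          # approximately [2, 2, 2, 2]

In the paper's setting, statistics of this kind would play the role of the global-motion cue used to split activities into broad classes, before the spatial least-squares step localizes the discriminative object configurations.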

Original language: English (US)
Title of host publication: 2008 19th International Conference on Pattern Recognition, ICPR 2008
State: Published - 2008
Event: 2008 19th International Conference on Pattern Recognition, ICPR 2008 - Tampa, FL, United States
Duration: Dec 8 2008 - Dec 11 2008

Publication series

Name: Proceedings - International Conference on Pattern Recognition
ISSN (Print): 1051-4651

Other

Other: 2008 19th International Conference on Pattern Recognition, ICPR 2008
Country/Territory: United States
City: Tampa, FL
Period: 12/8/08 - 12/11/08

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
