Searching human behaviors using spatial-temporal words

Huazhong Ning, Yuxiao Hu, Thomas S. Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper proposes an approach to searching for human behaviors in videos using spatial-temporal words, which are learned from unlabeled data containing various human behaviors through unsupervised learning. Both the query and the searched videos are represented by codeword frequencies, which capture the intrinsic motion and appearance information of human behaviors. This representation further enables us to use integral histograms to accelerate the search procedure. The performance also benefits from our feature representation, which, through a MAX-like operation, may simulate the cortical equivalent of the machine-vision "window of analysis" [1]. Examples of challenging sequences with complex behaviors, including tennis and ballet, are shown.
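
The abstract describes the representation only at a high level. As a rough illustration of how codeword-frequency histograms combined with an integral histogram allow fast scoring of spatio-temporal sub-volumes against a query, the following Python/NumPy sketch is offered. The per-pixel codeword labeling, the codebook size, the histogram-intersection similarity, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming each pixel of a video has already been quantized
# to a spatial-temporal codeword index (0..K-1). Not the paper's actual code.
import numpy as np

def integral_histogram(labels, num_codewords):
    """Cumulative codeword counts over a (T, H, W) label volume."""
    one_hot = np.zeros(labels.shape + (num_codewords,), dtype=np.int32)
    t, y, x = np.indices(labels.shape)
    one_hot[t, y, x, labels] = 1
    ih = one_hot.cumsum(axis=0).cumsum(axis=1).cumsum(axis=2)
    # Zero-pad the leading planes so the window sums below need no edge checks.
    return np.pad(ih, ((1, 0), (1, 0), (1, 0), (0, 0)))

def window_histogram(ih, t0, t1, y0, y1, x0, x1):
    """Codeword histogram of labels[t0:t1, y0:y1, x0:x1] via 3-D inclusion-exclusion."""
    return (ih[t1, y1, x1] - ih[t0, y1, x1] - ih[t1, y0, x1] - ih[t1, y1, x0]
            + ih[t0, y0, x1] + ih[t0, y1, x0] + ih[t1, y0, x0] - ih[t0, y0, x0])

def intersection_score(query_hist, candidate_hist):
    """Normalized histogram intersection: 1.0 means identical codeword distributions."""
    q = query_hist / max(query_hist.sum(), 1)
    c = candidate_hist / max(candidate_hist.sum(), 1)
    return float(np.minimum(q, c).sum())

# Toy usage with random labels standing in for quantized spatial-temporal features.
rng = np.random.default_rng(0)
video_labels = rng.integers(0, 50, size=(40, 36, 48))   # T x H x W, codebook of 50 words
query_labels = rng.integers(0, 50, size=(16, 18, 24))
ih = integral_histogram(video_labels, 50)
query_hist = np.bincount(query_labels.ravel(), minlength=50)
score = intersection_score(query_hist, window_histogram(ih, 10, 26, 6, 24, 12, 36))
```

Once the integral histogram is built, every candidate sub-volume can be scored in time proportional to the codebook size rather than the window volume, which is the advantage the abstract attributes to integral histograms.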

Original language: English (US)
Title of host publication: 2007 IEEE International Conference on Image Processing, ICIP 2007 Proceedings
Pages: VI-337 - VI-340
State: Published - 2007
Externally published: Yes
Event: 14th IEEE International Conference on Image Processing, ICIP 2007 - San Antonio, TX, United States
Duration: Sep 16, 2007 - Sep 19, 2007

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
Volume: 6
ISSN (Print): 1522-4880

Other

Other: 14th IEEE International Conference on Image Processing, ICIP 2007
Country/Territory: United States
City: San Antonio, TX
Period: 9/16/07 - 9/19/07

Keywords

  • Human behavior searching
  • Spatial-temporal words
  • Video matching

ASJC Scopus subject areas

  • Engineering (all)
