Detecting human actions in surveillance videos

Ming Yang, Shuiwang Ji, Wei Xu, Jinjun Wang, Fengjun Lv, Kai Yu, Yihong Gong, Mert Dikmen, Dennis J. Lin, Thomas S. Huang

Research output: Contribution to conference › Paper

Abstract

This notebook paper summarizes Team NEC-UIUC's approaches for the TRECVID 2009 Evaluation of Surveillance Event Detection. Our submissions include two types of systems. One system employs a brute-force search, testing each space-time location in the video with a binary classifier for whether a specific event occurs. The other system uses human detection and tracking to avoid the costly brute-force search and evaluates the candidate space-time cubes by combining 3D convolutional neural networks (CNN) and SVM classifiers based on bag-of-words local features to detect the presence of events of interest. Via thorough cross-validation on the development set, we select the combining weights and thresholds that minimize the detection cost rate (DCR). Our systems achieve good performance on event categories that involve actions of a single person, e.g., CellToEar, ObjectPut, and Pointing.
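As a rough illustration of the weight-and-threshold selection described in the abstract, the sketch below sweeps a convex fusion weight between two classifier scores and a decision threshold on held-out development data, keeping the pair that minimizes a DCR-style cost (miss probability plus a penalty on false alarms per hour). The beta constant, the toy scores and labels, and the function names are illustrative assumptions, not the actual TRECVID 2009 evaluation parameters or the authors' system outputs.

# Hypothetical sketch: pick a fusion weight w and threshold t that minimize
# a DCR-style cost on a development set. All numbers below are assumed.

def dcr(decisions, labels, beta, hours):
    """Detection cost rate: miss probability plus beta times false alarms per hour."""
    positives = sum(labels)
    misses = sum(1 for d, y in zip(decisions, labels) if y == 1 and not d)
    false_alarms = sum(1 for d, y in zip(decisions, labels) if y == 0 and d)
    p_miss = misses / positives if positives else 0.0
    r_fa = false_alarms / hours
    return p_miss + beta * r_fa

def select_weight_and_threshold(cnn_scores, svm_scores, labels, beta, hours):
    """Grid-search w and t so that fused = w*cnn + (1-w)*svm and
    decide = (fused >= t), minimizing the DCR on the development data."""
    best = (float("inf"), None, None)
    for w in [i / 20 for i in range(21)]:  # w in {0.0, 0.05, ..., 1.0}
        fused = [w * c + (1 - w) * s for c, s in zip(cnn_scores, svm_scores)]
        for t in sorted(set(fused)):
            decisions = [f >= t for f in fused]
            cost = dcr(decisions, labels, beta, hours)
            if cost < best[0]:
                best = (cost, w, t)
    return best

if __name__ == "__main__":
    # Toy development-set scores and ground truth (illustration only).
    cnn_scores = [0.9, 0.2, 0.7, 0.4, 0.1, 0.8]
    svm_scores = [0.8, 0.3, 0.6, 0.5, 0.2, 0.9]
    labels     = [1,   0,   1,   0,   0,   1  ]  # 1 = event present
    cost, w, t = select_weight_and_threshold(cnn_scores, svm_scores,
                                             labels, beta=0.05, hours=2.0)
    print(f"best DCR={cost:.3f} at weight={w:.2f}, threshold={t:.2f}")

In the actual submissions the weights and thresholds were chosen per event category by cross-validation on the development set; the grid search above is only meant to show the shape of that selection step.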

Original language: English (US)
State: Published - Jan 1 2009
Event: TREC Video Retrieval Evaluation, TRECVID 2009 - Gaithersburg, MD, United States
Duration: Nov 16 2009 - Nov 17 2009

Other

Other: TREC Video Retrieval Evaluation, TRECVID 2009
Country: United States
City: Gaithersburg, MD
Period: 11/16/09 - 11/17/09

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
  • Software


Cite this

Yang, M., Ji, S., Xu, W., Wang, J., Lv, F., Yu, K., Gong, Y., Dikmen, M., Lin, D. J., & Huang, T. S. (2009). Detecting human actions in surveillance videos. Paper presented at TREC Video Retrieval Evaluation, TRECVID 2009, Gaithersburg, MD, United States.