Searching for complex human activities with no visual examples

Research output: Contribution to journal › Article

Abstract

We describe a method of representing human activities that allows a collection of motions to be queried without examples, using a simple and effective query language. Our approach is based on units of activity at segments of the body, which can be composed across time and across the body to produce complex queries. The presence of search units is inferred automatically by tracking the body, lifting the tracks to 3D and comparing to models trained using motion capture data. Our models of short time scale limb behaviour are built using a labelled motion capture set. We show results for a large range of queries applied to a collection of complex motions and activities. We compare with discriminative methods applied to tracker data; our method offers significantly improved performance. We show experimental evidence that our method is robust to view direction and is unaffected by some important changes of clothing.
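The abstract's central idea, composing short per-limb activity units across the body (simultaneously) and across time (sequentially) to form a query, can be illustrated with a minimal sketch. This is a hypothetical illustration only, not the authors' implementation: the timeline representation, limb names, and `matches` function are assumptions made for the example, standing in for labels that the paper infers with HMM-based limb models.

```python
def matches(timeline, query):
    """Check a compositional activity query against a label timeline.

    timeline: list of per-frame dicts {limb: label} (stand-ins for the
              limb-unit labels the paper infers from tracked 3D motion).
    query:    list of steps in temporal order; each step is a dict of
              {limb: required_label}, i.e. a conjunction across the body.
    Returns True if the steps occur in order, each holding for at least
    one frame.
    """
    step = 0
    for frame in timeline:
        if step == len(query):
            break
        need = query[step]
        # Composition across the body: every required limb label must hold.
        if all(frame.get(limb) == lab for limb, lab in need.items()):
            step += 1  # Composition across time: advance to the next step.
    return step == len(query)


# Query "walk while waving, then stand": legs walk AND arm waves, then legs stand.
timeline = [
    {"legs": "walk", "left_arm": "wave"},
    {"legs": "walk", "left_arm": "wave"},
    {"legs": "stand", "left_arm": "down"},
]
query = [{"legs": "walk", "left_arm": "wave"}, {"legs": "stand"}]
print(matches(timeline, query))  # True
```

The sketch shows why no visual example is needed: a query is written directly in terms of named limb behaviours, and retrieval reduces to matching that symbolic pattern against inferred labels.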

Original language: English (US)
Pages (from-to): 337-357
Number of pages: 21
Journal: International Journal of Computer Vision
Volume: 80
Issue number: 3
DOIs: 10.1007/s11263-008-0142-8
State: Published - Dec 1 2008

Keywords

  • Activity
  • HMM
  • Human action recognition
  • Motion capture
  • Video retrieval

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

Cite this

Searching for complex human activities with no visual examples. / Ikizler, Nazli; Forsyth, David Alexander.

In: International Journal of Computer Vision, Vol. 80, No. 3, 01.12.2008, p. 337-357.

@article{5e443bba0d3542eba122ec45f307b30c,
title = "Searching for complex human activities with no visual examples",
abstract = "We describe a method of representing human activities that allows a collection of motions to be queried without examples, using a simple and effective query language. Our approach is based on units of activity at segments of the body, which can be composed across time and across the body to produce complex queries. The presence of search units is inferred automatically by tracking the body, lifting the tracks to 3D and comparing to models trained using motion capture data. Our models of short time scale limb behaviour are built using a labelled motion capture set. We show results for a large range of queries applied to a collection of complex motions and activities. We compare with discriminative methods applied to tracker data; our method offers significantly improved performance. We show experimental evidence that our method is robust to view direction and is unaffected by some important changes of clothing.",
keywords = "Activity, HMM, Human action recognition, Motion capture, Video retrieval",
author = "Nazli Ikizler and Forsyth, {David Alexander}",
year = "2008",
month = "12",
day = "1",
doi = "10.1007/s11263-008-0142-8",
language = "English (US)",
volume = "80",
pages = "337--357",
journal = "International Journal of Computer Vision",
issn = "0920-5691",
publisher = "Springer Netherlands",
number = "3",
}

TY - JOUR

T1 - Searching for complex human activities with no visual examples

AU - Ikizler, Nazli

AU - Forsyth, David Alexander

PY - 2008/12/1

Y1 - 2008/12/1

N2 - We describe a method of representing human activities that allows a collection of motions to be queried without examples, using a simple and effective query language. Our approach is based on units of activity at segments of the body, which can be composed across time and across the body to produce complex queries. The presence of search units is inferred automatically by tracking the body, lifting the tracks to 3D and comparing to models trained using motion capture data. Our models of short time scale limb behaviour are built using a labelled motion capture set. We show results for a large range of queries applied to a collection of complex motions and activities. We compare with discriminative methods applied to tracker data; our method offers significantly improved performance. We show experimental evidence that our method is robust to view direction and is unaffected by some important changes of clothing.

AB - We describe a method of representing human activities that allows a collection of motions to be queried without examples, using a simple and effective query language. Our approach is based on units of activity at segments of the body, which can be composed across time and across the body to produce complex queries. The presence of search units is inferred automatically by tracking the body, lifting the tracks to 3D and comparing to models trained using motion capture data. Our models of short time scale limb behaviour are built using a labelled motion capture set. We show results for a large range of queries applied to a collection of complex motions and activities. We compare with discriminative methods applied to tracker data; our method offers significantly improved performance. We show experimental evidence that our method is robust to view direction and is unaffected by some important changes of clothing.

KW - Activity

KW - HMM

KW - Human action recognition

KW - Motion capture

KW - Video retrieval

UR - http://www.scopus.com/inward/record.url?scp=52449115894&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=52449115894&partnerID=8YFLogxK

U2 - 10.1007/s11263-008-0142-8

DO - 10.1007/s11263-008-0142-8

M3 - Article

AN - SCOPUS:52449115894

VL - 80

SP - 337

EP - 357

JO - International Journal of Computer Vision

JF - International Journal of Computer Vision

SN - 0920-5691

IS - 3

ER -