Searching video for complex activities with finite state models

Nazli Ikizler, David Forsyth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We describe a method of representing human activities that allows a collection of motions to be queried without examples, using a simple and effective query language. Our approach is based on units of activity at segments of the body that can be composed across space and across the body to produce complex queries. The presence of search units is inferred automatically by tracking the body, lifting the tracks to 3D, and comparing them to models trained using motion capture data. We show results for a large range of queries applied to a collection of complex motions and activities. Our models of short-time-scale limb behaviour are built using a labelled motion capture set. We compare with discriminative methods applied to tracker data; our method offers significantly improved performance. We show experimental evidence that our method is robust to view direction and is unaffected by changes of clothing.
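The composition idea in the abstract — per-body-part activity units matched in sequence by a finite-state model — can be illustrated with a minimal sketch. This is not the authors' implementation; the labels, query structure, and function below are hypothetical, chosen only to show how a composite query ("walk, then stand and reach") can be matched against per-frame limb labels.

```python
def matches_query(frames, query):
    """Return True if `frames` satisfies the query states in order.

    frames: list of dicts mapping body part -> activity unit per frame,
            e.g. {"legs": "walk", "arms": "swing"} (hypothetical labels).
    query:  list of states; each state is a dict of required
            (body part -> activity unit) constraints.
    """
    state = 0  # index of the query state we are currently trying to satisfy
    for labels in frames:
        required = query[state]
        # A frame satisfies a state when every constrained part
        # carries the required activity unit.
        if all(labels.get(part) == unit for part, unit in required.items()):
            state += 1
            if state == len(query):
                return True  # all query states matched in order
    return state == len(query)


# Composite query spanning time and the body: walk, then stand while reaching.
query = [
    {"legs": "walk"},
    {"legs": "stand", "arms": "reach"},
]

frames = [
    {"legs": "walk",  "arms": "swing"},
    {"legs": "walk",  "arms": "swing"},
    {"legs": "stand", "arms": "reach"},
]
print(matches_query(frames, query))  # True
```

The real system infers the per-frame unit labels from 3D-lifted tracks rather than taking them as given; this sketch covers only the query-matching step.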

Original language: English (US)
Title of host publication: 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07
DOIs: https://doi.org/10.1109/CVPR.2007.383168
State: Published - Oct 11 2007
Event: 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07 - Minneapolis, MN, United States
Duration: Jun 17 2007 – Jun 22 2007

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919

Other

Other: 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07
Country: United States
City: Minneapolis, MN
Period: 6/17/07 – 6/22/07

Fingerprint

  • Query languages
  • Data acquisition

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition

Cite this

Ikizler, N., & Forsyth, D. (2007). Searching video for complex activities with finite state models. In 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07 [4270193] (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition). https://doi.org/10.1109/CVPR.2007.383168

Searching video for complex activities with finite state models. / Ikizler, Nazli; Forsyth, David.

2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07. 2007. 4270193 (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Ikizler, N & Forsyth, D 2007, Searching video for complex activities with finite state models. in 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07., 4270193, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07, Minneapolis, MN, United States, 6/17/07. https://doi.org/10.1109/CVPR.2007.383168
Ikizler N, Forsyth D. Searching video for complex activities with finite state models. In 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07. 2007. 4270193. (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition). https://doi.org/10.1109/CVPR.2007.383168
Ikizler, Nazli ; Forsyth, David. / Searching video for complex activities with finite state models. 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07. 2007. (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition).
@inproceedings{fb11ffac7c4f4746a260cf8d280daf6b,
title = "Searching video for complex activities with finite state models",
abstract = "We describe a method of representing human activities that allows a collection of motions to be queried without examples, using a simple and effective query language. Our approach is based on units of activity at segments of the body that can be composed across space and across the body to produce complex queries. The presence of search units is inferred automatically by tracking the body, lifting the tracks to 3D, and comparing them to models trained using motion capture data. We show results for a large range of queries applied to a collection of complex motions and activities. Our models of short-time-scale limb behaviour are built using a labelled motion capture set. We compare with discriminative methods applied to tracker data; our method offers significantly improved performance. We show experimental evidence that our method is robust to view direction and is unaffected by changes of clothing.",
author = "Nazli Ikizler and David Forsyth",
year = "2007",
month = "10",
day = "11",
doi = "10.1109/CVPR.2007.383168",
language = "English (US)",
isbn = "1424411807",
series = "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition",
booktitle = "2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07",

}

TY - GEN

T1 - Searching video for complex activities with finite state models

AU - Ikizler, Nazli

AU - Forsyth, David

PY - 2007/10/11

Y1 - 2007/10/11

N2 - We describe a method of representing human activities that allows a collection of motions to be queried without examples, using a simple and effective query language. Our approach is based on units of activity at segments of the body that can be composed across space and across the body to produce complex queries. The presence of search units is inferred automatically by tracking the body, lifting the tracks to 3D, and comparing them to models trained using motion capture data. We show results for a large range of queries applied to a collection of complex motions and activities. Our models of short-time-scale limb behaviour are built using a labelled motion capture set. We compare with discriminative methods applied to tracker data; our method offers significantly improved performance. We show experimental evidence that our method is robust to view direction and is unaffected by changes of clothing.

AB - We describe a method of representing human activities that allows a collection of motions to be queried without examples, using a simple and effective query language. Our approach is based on units of activity at segments of the body that can be composed across space and across the body to produce complex queries. The presence of search units is inferred automatically by tracking the body, lifting the tracks to 3D, and comparing them to models trained using motion capture data. We show results for a large range of queries applied to a collection of complex motions and activities. Our models of short-time-scale limb behaviour are built using a labelled motion capture set. We compare with discriminative methods applied to tracker data; our method offers significantly improved performance. We show experimental evidence that our method is robust to view direction and is unaffected by changes of clothing.

UR - http://www.scopus.com/inward/record.url?scp=34948857983&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=34948857983&partnerID=8YFLogxK

U2 - 10.1109/CVPR.2007.383168

DO - 10.1109/CVPR.2007.383168

M3 - Conference contribution

AN - SCOPUS:34948857983

SN - 1424411807

SN - 9781424411801

T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition

BT - 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07

ER -