Automatic annotation of everyday movements

Deva Ramanan, D. A. Forsyth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


This paper describes a system that can annotate a video sequence with: a description of the appearance of each actor; when the actor is in view; and a representation of the actor's activity while in view. The system does not require a fixed background and is fully automatic. It works by (1) tracking people in 2D and then, using an annotated motion capture dataset, (2) synthesizing an annotated 3D motion sequence matching the 2D tracks. The 3D motion capture data is manually annotated off-line using a class structure that describes everyday motions and allows motion annotations to be composed: one may jump while running, for example. Descriptions computed from video of real motions show that the method is accurate.
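The two-stage pipeline in the abstract can be sketched in a few lines. The sketch below is a toy illustration, not the paper's method: `match_annotations`, the fixed orthographic `camera`, and the tiny mocap "library" are all assumptions introduced here. It shows only the core idea of stage (2): compare each annotated 3D mocap sequence against an observed 2D track via a projection error, and transfer the winning sequence's composable annotation set (e.g. `{"run", "jump"}`) to the video.

```python
import numpy as np

def match_annotations(track_2d, mocap_library,
                      camera=lambda p3d: p3d[..., :2]):
    """Toy matcher: pick the annotated mocap sequence whose projected
    joints best explain the observed 2D track (sum of squared distances).
    `camera` here is a trivial orthographic projection (drop the z axis)."""
    best_cost, best_labels = float("inf"), None
    for seq_3d, labels in mocap_library:
        cost = float(np.sum((camera(seq_3d) - track_2d) ** 2))
        if cost < best_cost:
            best_cost, best_labels = cost, labels
    return best_labels, best_cost

# Tiny synthetic library: 2 frames x 3 joints of 3D positions, each
# sequence carrying a composable annotation set, as in the paper's
# class structure (one may jump while running).
run_seq = np.array([[[0, 0, 0], [0, 1, 0], [0, 2, 0]],
                    [[1, 0, 0], [1, 1, 0], [1, 2, 0]]], dtype=float)
jump_seq = run_seq + np.array([0.0, 0.5, 0.0])   # same motion, lifted
library = [(run_seq, {"run"}), (jump_seq, {"run", "jump"})]

# Observed 2D track: the projected "run" sequence plus a little noise.
track = run_seq[..., :2] + 0.01

labels, cost = match_annotations(track, library)
```

With this data the "run" sequence projects closest to the track, so the returned annotation set is `{"run"}`; a real system would match over time windows and synthesize a full 3D motion rather than nearest-neighbor over whole sequences.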

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems 16 - Proceedings of the 2003 Conference, NIPS 2003
Publisher: Neural Information Processing Systems Foundation
ISBN (Print): 0262201526, 9780262201520
State: Published - Jan 1 2004
Externally published: Yes
Event: 17th Annual Conference on Neural Information Processing Systems, NIPS 2003 - Vancouver, BC, Canada
Duration: Dec 8 2003 - Dec 13 2003

Publication series

Name: Advances in Neural Information Processing Systems
ISSN (Print): 1049-5258


Other: 17th Annual Conference on Neural Information Processing Systems, NIPS 2003
City: Vancouver, BC

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
