Finding and tracking people from the bottom up

Deva Ramanan, D. A. Forsyth

Research output: Contribution to journal › Conference article › peer-review

Abstract

We describe a tracker that can track moving people in long sequences without manual initialization. Moving people are modeled with the assumption that, while configuration can vary quite substantially from frame to frame, appearance does not. This leads to an algorithm that first builds a model of the appearance of the body of each individual by clustering candidate body segments, and then uses this model to find all individuals in each frame. Unusually, the tracker does not rely on a model of human dynamics to identify possible instances of people; such models are unreliable, because human motion is fast and large accelerations are common. We show that our tracking algorithm can be interpreted as a loopy inference procedure on an underlying Bayes net. Experiments on video of real scenes demonstrate that this tracker can (a) count distinct individuals; (b) identify and track them; (c) recover when it loses track, for example, if individuals are occluded or briefly leave the view; (d) identify the configuration of the body largely correctly; and (e) operate without depending on particular models of human motion.
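The two-phase structure described in the abstract, clustering candidate body segments across the sequence to learn a per-individual appearance model and then detecting people in each frame with that model, can be illustrated with a minimal sketch. This is not the paper's implementation: the segment descriptor (a color histogram), the clustering method (k-means), the matching threshold, and all function names below are illustrative assumptions, and the paper's segment detectors and loopy Bayes-net inference are not reproduced here.

```python
# Minimal sketch of the bottom-up, two-phase idea from the abstract,
# NOT the paper's actual algorithm. Assumes candidate body-segment image
# patches (H x W x 3 arrays) have already been extracted.
import numpy as np
from sklearn.cluster import KMeans


def color_histogram(patch, bins=8):
    """Appearance descriptor for one candidate segment: a joint RGB histogram."""
    hist, _ = np.histogramdd(
        patch.reshape(-1, 3), bins=(bins, bins, bins), range=[(0, 256)] * 3
    )
    hist = hist.ravel().astype(float)
    return hist / (hist.sum() + 1e-9)


def build_appearance_models(candidate_patches, n_people):
    """Phase 1: cluster segment descriptors pooled over the whole sequence.
    Cluster centers play the role of per-individual appearance models."""
    feats = np.stack([color_histogram(p) for p in candidate_patches])
    return KMeans(n_clusters=n_people, n_init=10, random_state=0).fit(feats)


def label_frame(frame_patches, km, max_dist=0.5):
    """Phase 2: assign each candidate segment in one frame to a person
    (cluster index), or to -1 if it matches no learned appearance."""
    labels = []
    for p in frame_patches:
        f = color_histogram(p)
        d = np.linalg.norm(km.cluster_centers_ - f[None, :], axis=1)
        labels.append(int(np.argmin(d)) if d.min() < max_dist else -1)
    return labels
```

Because the appearance models are learned from the whole sequence rather than from a first-frame initialization, a sketch like this can in principle reacquire an individual after occlusion or a brief exit from view, which is the property the abstract emphasizes.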

Original language: English (US)
Pages (from-to): II/467-II/474
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Volume: 2
State: Published - 2003
Externally published: Yes
Event: 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2003 - Madison, WI, United States
Duration: Jun 18 2003 to Jun 20 2003

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
