How does CONDENSATION behave with a finite number of samples?

O. King, D. A. Forsyth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Condensation is a popular algorithm for sequential inference that resamples a sampled representation of the posterior. The algorithm is known to be asymptotically correct as the number of samples tends to infinity. However, the resampling phase involves a loss of information. The sequence of representations produced by the algorithm is a Markov chain, which is usually inhomogeneous. We show simple discrete examples where this chain is homogeneous and has absorbing states. In these examples, the representation moves to one of these states in time apparently linear in the number of samples and remains there. This phenomenon appears in the continuous case as well, where the algorithm tends to produce "clumpy" representations. In practice, this means that different runs of a tracker on the same data can give very different answers, while a particular run of the tracker will look stable. Furthermore, the state of the tracker can collapse to a single peak (which has non-zero probability of being the wrong peak) within time linear in the number of samples, and the tracker can appear to be following tight peaks in the posterior even in the absence of any meaningful measurement. This means that, if theoretical lower bounds on the number of samples are not available, experiments must be very carefully designed to avoid these effects.
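
For intuition about the collapse behaviour described above, the discrete case can be reproduced by simulating the resampling step alone. The sketch below is illustrative only (Python; the function name, the two-state setup, and the trial counts are choices made here, not taken from the paper): it resamples a sample set with equal weights, i.e. with no meaningful measurement, and counts the resampling steps until every sample carries the same value, an absorbing state of the chain.

import random

def steps_until_collapse(n_samples, n_states=2, seed=None):
    # CONDENSATION-style loop on a static discrete state with an uninformative
    # likelihood: every sample gets the same weight, so resampling is pure
    # multinomial drift. Returns the number of resampling steps until all
    # samples agree, i.e. until the chain reaches an absorbing state.
    rng = random.Random(seed)
    samples = [rng.randrange(n_states) for _ in range(n_samples)]
    steps = 0
    while len(set(samples)) > 1:
        weights = [1.0] * n_samples            # no meaningful measurement
        samples = rng.choices(samples, weights=weights, k=n_samples)
        steps += 1
    return steps

if __name__ == "__main__":
    for n in (25, 50, 100, 200):
        trials = [steps_until_collapse(n, seed=t) for t in range(200)]
        print(f"N = {n:4d}: mean steps to collapse ~ {sum(trials) / len(trials):.1f}")

In runs of this sketch the mean number of steps to collapse grows roughly linearly with the number of samples, and which value survives is random, consistent with the abstract's observations that collapse time appears linear in the sample count and that different runs on the same data can give very different answers.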

Original language: English (US)
Title of host publication: Computer Vision - ECCV 2000 - 6th European Conference on Computer Vision, Proceedings
Editors: David Vernon
Publisher: Springer-Verlag
Pages: 695-709
Number of pages: 15
ISBN (Print): 3540676856
State: Published - Jan 1 2000
Externally published: Yes
Event: 6th European Conference on Computer Vision, ECCV 2000 - Dublin, Ireland
Duration: Jun 26 2000 - Jul 1 2000

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 1842
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 6th European Conference on Computer Vision, ECCV 2000
Country: Ireland
City: Dublin
Period: 6/26/00 - 7/1/00

Fingerprint

Linear Time
Tend
Resampling
Condensation
Absorbing
Markov processes
Markov chain
Infinity
Lower bound
Experiments

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science(all)

Cite this

King, O., & Forsyth, D. A. (2000). How does CONDENSATION behave with a finite number of samples? In D. Vernon (Ed.), Computer Vision - ECCV 2000 - 6th European Conference on Computer Vision, Proceedings (pp. 695-709). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 1842). Springer-Verlag.
