TY - GEN
T1 - How does CONDENSATION behave with a finite number of samples?
AU - King, O.
AU - Forsyth, D. A.
N1 - Publisher Copyright:
© Springer-Verlag Berlin Heidelberg 2000.
PY - 2000
Y1 - 2000
N2 - Condensation is a popular algorithm for sequential inference that resamples a sampled representation of the posterior. The algorithm is known to be asymptotically correct as the number of samples tends to infinity. However, the resampling phase involves a loss of information. The sequence of representations produced by the algorithm is a Markov chain, which is usually inhomogeneous. We show simple discrete examples where this chain is homogeneous and has absorbing states. In these examples, the representation moves to one of these states in time apparently linear in the number of samples and remains there. This phenomenon appears in the continuous case as well, where the algorithm tends to produce "clumpy" representations. In practice, this means that different runs of a tracker on the same data can give very different answers, while a particular run of the tracker will look stable. Furthermore, the state of the tracker can collapse to a single peak (which has a non-zero probability of being the wrong peak) within time linear in the number of samples, and the tracker can appear to be following tight peaks in the posterior even in the absence of any meaningful measurement. This means that, if theoretical lower bounds on the number of samples are not available, experiments must be very carefully designed to avoid these effects.
UR - http://www.scopus.com/inward/record.url?scp=0007997129&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0007997129&partnerID=8YFLogxK
DO - 10.1007/3-540-45054-8_45
M3 - Conference contribution
AN - SCOPUS:0007997129
SN - 3540676856
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 695
EP - 709
BT - Computer Vision - ECCV 2000 - 6th European Conference on Computer Vision, Proceedings
A2 - Vernon, David
PB - Springer
T2 - 6th European Conference on Computer Vision, ECCV 2000
Y2 - 26 June 2000 through 1 July 2000
ER -