Abstract
The Markov Chain Distributed Particle Filter (MCDPF) is an algorithm that lets the nodes of a sensor network cooperatively run a particle filter: each sensor updates a local particle set using only its own measurements, and particles are then exchanged between neighboring sensors according to a Markov chain on the network graph. This paper extends previously known almost sure convergence results for the MCDPF by proving that the MCDPF converges to the optimal filter in mean square as the number of particles and the number of Markov chain steps both go to infinity. The proof yields an explicit error bound showing that the convergence rate is inverse square-root in both parameters. A numerical example is provided to support the theoretical result.
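As a rough illustration of the scheme the abstract describes (local weighting plus particle exchange along a Markov chain on the network), the following Python sketch implements a simplified, hypothetical variant. The scalar random-walk state model, Gaussian measurement model, ring topology, parameter values, and the way weights are accumulated along the random walk are all assumptions made for illustration; this is not the MCDPF as specified in the paper.

```python
# Loose, hypothetical sketch of the idea in the abstract: each node weights
# its local particles with its own measurement likelihood, particles then take
# random-walk (Markov chain) steps on the network graph, accumulating the
# local likelihoods of the nodes they visit, and each node resamples whatever
# it receives. All models and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_NODES = 4          # sensors in the network
N_PARTICLES = 200    # particles held at each node
K_STEPS = 10         # Markov chain (particle exchange) steps per filter update

# Ring network: adjacency matrix and a lazy random-walk transition matrix.
A = np.zeros((N_NODES, N_NODES))
for i in range(N_NODES):
    A[i, (i - 1) % N_NODES] = A[i, (i + 1) % N_NODES] = 1.0
P = 0.25 * A + 0.5 * np.eye(N_NODES)   # each row sums to 1


def propagate(particles):
    """Assumed state model: scalar random walk with Gaussian process noise."""
    return particles + rng.normal(0.0, 0.5, size=particles.shape)


def likelihood(x, z, noise_std=1.0):
    """Assumed measurement model at every node: z = x + Gaussian noise."""
    return np.exp(-0.5 * ((z - x) / noise_std) ** 2)


def mcdpf_step(particle_sets, measurements):
    """One (simplified) update: local weighting, particle exchange, resampling."""
    received = [[] for _ in range(N_NODES)]
    for i in range(N_NODES):
        for x in propagate(particle_sets[i]):
            node = i
            log_w = np.log(likelihood(x, measurements[i]) + 1e-300)
            # Random walk on the graph, multiplying in local likelihoods.
            for _ in range(K_STEPS):
                node = rng.choice(N_NODES, p=P[node])
                log_w += np.log(likelihood(x, measurements[node]) + 1e-300)
            received[node].append((x, log_w))

    # Each node resamples the particles it received, by accumulated weight.
    new_sets = []
    for i in range(N_NODES):
        if not received[i]:
            new_sets.append(particle_sets[i])   # nothing arrived this step
            continue
        xs = np.array([x for x, _ in received[i]])
        lw = np.array([lw for _, lw in received[i]])
        w = np.exp(lw - lw.max())
        w /= w.sum()
        idx = rng.choice(len(xs), size=N_PARTICLES, p=w)
        new_sets.append(xs[idx])
    return new_sets


# Toy usage: every node ends up tracking the same scalar state.
true_x = 0.0
particle_sets = [rng.normal(0.0, 1.0, N_PARTICLES) for _ in range(N_NODES)]
for t in range(5):
    true_x += rng.normal(0.0, 0.5)
    z = true_x + rng.normal(0.0, 1.0, N_NODES)   # one noisy measurement per node
    particle_sets = mcdpf_step(particle_sets, z)
    print(f"t={t}: true={true_x:+.2f}",
          "node means:", np.round([p.mean() for p in particle_sets], 2))
```

In this toy version the likelihoods accumulated along each particle's random walk stand in for the joint likelihood over all sensors; per the abstract, the paper's analysis shows the resulting error decays as the inverse square root of both the particle count and the number of Markov chain steps.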
Original language | English (US) |
---|---|
Article number | 6365849 |
Pages (from-to) | 801-812 |
Number of pages | 12 |
Journal | IEEE Transactions on Signal Processing |
Volume | 61 |
Issue number | 4 |
DOIs | |
State | Published - 2013 |
Keywords
- Bayesian estimation
- Markov chain
- distributed estimation
- optimal filtering
- particle filtering
ASJC Scopus subject areas
- Signal Processing
- Electrical and Electronic Engineering