TY - JOUR
T1 - Multivariable feedback particle filter
AU - Yang, Tao
AU - Laugesen, Richard S.
AU - Mehta, Prashant G.
AU - Meyn, Sean P.
N1 - Funding Information:
Financial support from the NSF grants 1334987 and 1462773, and the Simons foundation grant 204296 is gratefully acknowledged. The material in this paper was presented at the 51st IEEE Conference on Decision and Control, December 10–13, 2012, Maui, HI, USA. This paper was recommended for publication in revised form by Associate Editor Valery Ugrinovskii under the direction of Editor Ian R. Petersen. The conference version of this paper appeared in Yang, Laugesen, Mehta, and Meyn (2012).
PY - 2012
Y1 - 2012
N2 - In recent work it was shown that importance sampling can be avoided in the particle filter through an innovation structure inspired by traditional nonlinear filtering combined with Mean-Field Game formalisms [9], [19]. The resulting feedback particle filter (FPF) offers significant variance improvements; in particular, the algorithm can be applied to systems that are not stable. The filter comes with an up-front computational cost to obtain the filter gain. This paper describes new representations and algorithms to compute the gain in the general multivariable setting. The main contributions are: (i) The theory surrounding the FPF is improved: consistency is established in the multivariable setting, as is well-posedness of the associated PDE for the filter gain. (ii) The gain can be expressed as the gradient of a function, which is precisely the solution to Poisson's equation for a related MCMC diffusion (the Smoluchowski equation). This provides a bridge to MCMC as well as to approximate optimal filtering approaches such as TD-learning, which can in turn be used to approximate the gain. (iii) Motivated by a weak formulation of Poisson's equation, a Galerkin finite-element algorithm is proposed for approximation of the gain. Its performance is illustrated in numerical experiments.
AB - In recent work it was shown that importance sampling can be avoided in the particle filter through an innovation structure inspired by traditional nonlinear filtering combined with Mean-Field Game formalisms [9], [19]. The resulting feedback particle filter (FPF) offers significant variance improvements; in particular, the algorithm can be applied to systems that are not stable. The filter comes with an up-front computational cost to obtain the filter gain. This paper describes new representations and algorithms to compute the gain in the general multivariable setting. The main contributions are: (i) The theory surrounding the FPF is improved: consistency is established in the multivariable setting, as is well-posedness of the associated PDE for the filter gain. (ii) The gain can be expressed as the gradient of a function, which is precisely the solution to Poisson's equation for a related MCMC diffusion (the Smoluchowski equation). This provides a bridge to MCMC as well as to approximate optimal filtering approaches such as TD-learning, which can in turn be used to approximate the gain. (iii) Motivated by a weak formulation of Poisson's equation, a Galerkin finite-element algorithm is proposed for approximation of the gain. Its performance is illustrated in numerical experiments.
UR - http://www.scopus.com/inward/record.url?scp=84874276466&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84874276466&partnerID=8YFLogxK
U2 - 10.1109/CDC.2012.6425937
DO - 10.1109/CDC.2012.6425937
M3 - Conference article
AN - SCOPUS:84874276466
SN - 0743-1546
SP - 4063
EP - 4070
JO - Proceedings of the IEEE Conference on Decision and Control
JF - Proceedings of the IEEE Conference on Decision and Control
M1 - 6425937
T2 - 51st IEEE Conference on Decision and Control, CDC 2012
Y2 - 10 December 2012 through 13 December 2012
ER -