TY - GEN
T1 - Dynamic programming for POMDP with jointly discrete and continuous state-spaces
AU - Lee, Donghwan
AU - He, Niao
AU - Hu, Jianghai
N1 - Funding Information:
This material is based upon work supported by the National Science Foundation under Grant No. 1539527. D. Lee and N. He are with the Coordinated Science Laboratory (CSL), University of Illinois at Urbana-Champaign, IL 61801, USA (donghwan@illinois.edu, niaohe@illinois.edu).
PY - 2019/7
Y1 - 2019/7
AB - In this work, we study dynamic programming (DP) algorithms for partially observable Markov decision processes (POMDPs) with jointly continuous and discrete state-spaces. We consider a class of stochastic systems with coupled discrete and continuous dynamics in which only the continuous state is observable. This family includes many real-world systems, for example, Markovian jump linear systems and physical systems interacting with humans. A finite history of observations is used as a new information state, and the convergence of the corresponding DP algorithms is proved. In particular, we prove that the DP iterates converge to a bounded set around an optimal solution. Although only deterministic DP algorithms are studied in this paper, this fundamental work is expected to lay the foundation for further studies on reinforcement learning algorithms for the same family of systems.
UR - http://www.scopus.com/inward/record.url?scp=85072293118&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85072293118&partnerID=8YFLogxK
U2 - 10.23919/acc.2019.8815313
DO - 10.23919/acc.2019.8815313
M3 - Conference contribution
AN - SCOPUS:85072293118
T3 - Proceedings of the American Control Conference
SP - 1250
EP - 1255
BT - 2019 American Control Conference, ACC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 American Control Conference, ACC 2019
Y2 - 10 July 2019 through 12 July 2019
ER -
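To make the abstract's construction concrete: the idea is to replace the unobservable joint (discrete, continuous) state with a finite window of past observations and run DP on that window. Below is a minimal Python sketch of finite-history value iteration on a toy problem. The two-mode linear dynamics, the grid discretization of the continuous observation, the quadratic cost, and all parameter values are illustrative assumptions, not the model or algorithm from the paper.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Toy model (assumed for illustration): hidden discrete mode z in {0, 1}
# scales an observed scalar state, x' = a[z] * x + u + noise.
a = np.array([0.5, 1.2])              # per-mode dynamics gains
x_grid = np.linspace(-2.0, 2.0, 9)    # discretized observation space
u_set = np.array([-0.5, 0.0, 0.5])    # finite action set
gamma = 0.95                          # discount factor
H = 2                                 # observation-history length

def nearest(x):
    """Snap a continuous observation to the nearest grid index."""
    return int(np.argmin(np.abs(x_grid - x)))

# Information state: the last H (discretized) observations, as in the
# paper's finite-history construction; here a tuple of grid indices.
histories = list(product(range(len(x_grid)), repeat=H))
V = {h: 0.0 for h in histories}

def sample_next(ix, u, n=30):
    """Monte Carlo samples of the next observation index, averaging over
    the hidden mode with a uniform prior (a deliberate simplification)."""
    outs = []
    for _ in range(n):
        z = rng.integers(2)
        x_next = a[z] * x_grid[ix] + u + 0.1 * rng.standard_normal()
        outs.append(nearest(np.clip(x_next, -2.0, 2.0)))
    return outs

# Value iteration over the finite-history information state: shift the
# window, append the sampled next observation, and take the Bellman min.
for sweep in range(50):
    V_new = {}
    for h in histories:
        ix = h[-1]                    # most recent observation index
        best = np.inf
        for u in u_set:
            cost = x_grid[ix] ** 2 + 0.1 * u ** 2
            ev = np.mean([V[h[1:] + (j,)] for j in sample_next(ix, u)])
            best = min(best, cost + gamma * ev)
        V_new[h] = best
    V = V_new

print("V at history (center, center):", V[(4, 4)])
```

The history tuple plays the role of the information state. The paper's result, that the DP iterates converge to a bounded set around an optimal solution, concerns the exact (non-discretized) setting; this toy loop only illustrates the mechanics of iterating over finite observation histories.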