### Abstract

In this work, we study dynamic programming (DP) algorithms for partially observable Markov decision processes with jointly continuous and discrete state spaces. We consider a class of stochastic systems with coupled discrete and continuous dynamics, where only the continuous state is observable. This family of systems includes many real-world systems, for example, Markovian jump linear systems and physical systems interacting with humans. A finite history of observations is used as a new information state, and the convergence of the corresponding DP algorithms is proved. In particular, we prove that the DP iterations converge to a bounded set around an optimal solution. Although only deterministic DP algorithms are studied in this paper, we expect this fundamental work to lay the foundation for advanced studies on reinforcement learning algorithms for the same family of systems.
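To convey the finite-history idea, here is a minimal toy sketch (not the paper's algorithm or model): a hidden binary Markov mode is only indirectly observed, the last `K` observations serve as the information state, and value iteration is run over that finite information-state space. All model parameters (`P_STAY`, `P_OBS`, the reward) are invented for illustration.

```python
import itertools

# Toy partially observed system: a hidden binary mode m with sticky Markov
# dynamics, a noisy binary observation y, and reward 1 when the action
# guesses the current mode. All numbers are illustrative assumptions.
P_STAY = 0.9    # P(m' = m): mode persistence
P_OBS = 0.8     # P(y = m): observation accuracy
GAMMA = 0.95    # discount factor
K = 2           # history length used as the information state

def belief(history):
    """P(m = 1 | last K observations), via forward filtering from a uniform prior."""
    b = 0.5  # P(m = 1) before seeing anything
    for y in history:
        # predict step: push the belief through the mode Markov chain
        b = b * P_STAY + (1 - b) * (1 - P_STAY)
        # update step: Bayes' rule with the observation likelihood
        like1 = P_OBS if y == 1 else 1 - P_OBS
        like0 = P_OBS if y == 0 else 1 - P_OBS
        b = b * like1 / (b * like1 + (1 - b) * like0)
    return b

# The information state is the tuple of the last K observations.
histories = list(itertools.product([0, 1], repeat=K))
V = {h: 0.0 for h in histories}

for _ in range(200):  # value iteration over the finite information-state space
    V_new = {}
    for h in histories:
        b1 = belief(h)
        # predicted belief about the next mode, and next-observation distribution
        b1_next = b1 * P_STAY + (1 - b1) * (1 - P_STAY)
        p_y1 = b1_next * P_OBS + (1 - b1_next) * (1 - P_OBS)
        q = []
        for a in (0, 1):
            r = b1 if a == 1 else 1 - b1  # P(action matches current mode)
            # the next information state drops the oldest observation
            ev = p_y1 * V[h[1:] + (1,)] + (1 - p_y1) * V[h[1:] + (0,)]
            q.append(r + GAMMA * ev)
        V_new[h] = max(q)
    V = V_new

for h in histories:
    print(h, round(V[h], 3))
```

Because the history only summarizes the last `K` observations rather than the full information state, this DP is approximate, which mirrors the abstract's convergence-to-a-bounded-set guarantee rather than exact optimality.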

Original language | English (US)
---|---
Title of host publication | 2019 American Control Conference, ACC 2019
Publisher | Institute of Electrical and Electronics Engineers Inc.
Pages | 1250-1255
Number of pages | 6
ISBN (Electronic) | 9781538679265
State | Published - Jul 2019
Externally published | Yes
Event | 2019 American Control Conference, ACC 2019 - Philadelphia, United States (Jul 10 2019 → Jul 12 2019)

### Publication series

Name | Proceedings of the American Control Conference
---|---
Volume | 2019-July
ISSN (Print) | 0743-1619

### Conference

Conference | 2019 American Control Conference, ACC 2019
---|---
Country | United States
City | Philadelphia
Period | 7/10/19 → 7/12/19

### ASJC Scopus subject areas

- Electrical and Electronic Engineering

### Cite this

*2019 American Control Conference, ACC 2019* (pp. 1250-1255). [8815313] (Proceedings of the American Control Conference; Vol. 2019-July). Institute of Electrical and Electronics Engineers Inc.