Abstract
Modified policy iteration (MPI), also known as optimistic policy iteration, is at the core of many reinforcement learning algorithms. It interpolates between policy iteration and value iteration by replacing exact policy evaluation with a fixed number of value-update sweeps. The convergence of MPI has been well studied for discounted and average-cost MDPs. In this work, we consider the exponential-cost risk-sensitive MDP formulation, which is known to provide some robustness to model parameters. Although policy iteration and value iteration have been well studied in the context of risk-sensitive MDPs, modified policy iteration is relatively unexplored. We provide the first proof that MPI also converges for the risk-sensitive problem in the case of finite state and action spaces. Since the exponential-cost formulation involves a multiplicative Bellman equation, our main contribution is a convergence proof that is quite different from existing results for discounted and risk-neutral average-cost problems.
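The abstract gives no pseudocode, so the following is only a minimal sketch of how MPI might look against a multiplicative Bellman operator, assuming a finite MDP with transition tensor `P[s, a, s']`, cost matrix `c[s, a]`, and risk parameter `beta > 0`. All names here (`mpi_risk_sensitive`, `beta`, `m`, `n_iters`) and the relative-value-style normalization are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def mpi_risk_sensitive(P, c, beta=0.1, m=5, n_iters=100):
    """Sketch of modified policy iteration for an exponential-cost MDP.

    P : (S, A, S) transition probabilities, c : (S, A) one-step costs.
    Each iteration does one greedy improvement step against the
    multiplicative Bellman operator, then m partial evaluation sweeps
    under the greedy policy instead of solving evaluation exactly.
    """
    n_states, n_actions, _ = P.shape
    V = np.ones(n_states)  # positive initial guess (the operator preserves positivity)

    for _ in range(n_iters):
        # Multiplicative Bellman operator: Q[s, a] = e^{beta c(s, a)} * sum_s' P(s'|s, a) V(s')
        Q = np.exp(beta * c) * np.einsum("sat,t->sa", P, V)
        pi = Q.argmin(axis=1)  # greedy improvement (exp is monotone, so argmin is unchanged)

        # Partial policy evaluation: apply the policy's operator m times
        c_pi = c[np.arange(n_states), pi]
        P_pi = P[np.arange(n_states), pi]
        for _ in range(m):
            V = np.exp(beta * c_pi) * (P_pi @ V)
        # Renormalize in the style of relative value iteration to keep V bounded;
        # whether this matches the paper's scheme is an assumption of this sketch.
        V = V / V.max()

    return pi, V
```

The normalization is one common way to stabilize multiplicative iterates in average-cost-style analyses; the discarded scale factor tracks the multiplicative growth rate of the operator.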
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 395-406 |
| Number of pages | 12 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 211 |
| State | Published - 2023 |
| Event | 5th Annual Conference on Learning for Dynamics and Control, L4DC 2023 - Philadelphia, United States; Jun 15-16, 2023 |
Keywords
- robust stochastic control
- dynamic programming
- risk-sensitive stochastic control
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability