Modified Policy Iteration for Exponential Cost Risk Sensitive MDPs

Yashaswini Murthy, Mehrdad Moharrami, R. Srikant

Research output: Contribution to journal › Conference article › peer-review

Abstract

Modified policy iteration (MPI), also known as optimistic policy iteration, is at the core of many reinforcement learning algorithms. It works by combining elements of policy iteration and value iteration. The convergence of MPI has been well studied in the case of discounted and average-cost MDPs. In this work, we consider the exponential cost risk-sensitive MDP formulation, which is known to provide some robustness to model parameters. Although policy iteration and value iteration have been well studied in the context of risk-sensitive MDPs, modified policy iteration is relatively unexplored. We provide the first proof that MPI also converges for the risk-sensitive problem in the case of finite state and action spaces. Since the exponential cost formulation deals with the multiplicative Bellman equation, our main contribution is a convergence proof which is quite different from existing results for discounted and risk-neutral average-cost problems.
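As a rough illustration of the multiplicative Bellman recursion the abstract refers to, the following Python sketch shows a generic MPI loop for a finite exponential-cost MDP: a greedy improvement step with respect to the multiplicative Bellman operator, followed by m partial policy-evaluation sweeps. The function name, the normalization at a reference state, and the stopping test are illustrative assumptions, not the scheme analyzed in the paper.

import numpy as np

def risk_sensitive_mpi(P, c, beta=1.0, m=5, iters=500, tol=1e-10):
    """Hypothetical sketch of modified (optimistic) policy iteration for a
    finite exponential-cost risk-sensitive MDP; not the paper's exact scheme.

    P    : (A, S, S) array, P[a, s, t] = Pr(next state t | state s, action a)
    c    : (S, A) array of one-stage costs
    beta : risk-sensitivity parameter in exp(beta * cost)
    m    : number of partial policy-evaluation sweeps per improvement step
    """
    A, S, _ = P.shape
    V = np.ones(S)      # keep values positive: the Bellman operator is multiplicative
    rho = None          # running estimate of the per-stage growth rate
    pi = np.zeros(S, dtype=int)

    for _ in range(iters):
        # Multiplicative Bellman operator: Q[s, a] = exp(beta * c(s, a)) * E[V(s') | s, a]
        Q = np.exp(beta * c) * np.einsum('ast,t->sa', P, V)
        pi = Q.argmin(axis=1)                      # greedy (improved) policy

        # Partial policy evaluation: apply the policy's operator m times,
        # renormalizing at a reference state so V stays bounded
        # (relative-value-iteration style normalization, an assumption here).
        for _ in range(m):
            TV = np.exp(beta * c[np.arange(S), pi]) * np.einsum(
                'st,t->s', P[pi, np.arange(S), :], V)
            rho_new = TV[0]                        # reference-state normalization
            V_new = TV / rho_new
            if rho is not None and abs(rho_new - rho) < tol \
                    and np.max(np.abs(V_new - V)) < tol:
                return pi, np.log(rho_new) / beta, V_new
            V, rho = V_new, rho_new

    return pi, np.log(rho) / beta, V

In this sketch, np.log(rho) / beta is returned as an estimate of the optimal risk value (the logarithmic growth rate of the multiplicative recursion); that interpretation, and the choice of state 0 as the reference state, are assumptions made for the example only.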

Original language: English (US)
Pages (from-to): 395-406
Number of pages: 12
Journal: Proceedings of Machine Learning Research
Volume: 211
State: Published - 2023
Event: 5th Annual Conference on Learning for Dynamics and Control, L4DC 2023 - Philadelphia, United States
Duration: Jun 15 2023 - Jun 16 2023

Keywords

  • Robust stochastic control
  • dynamic programming
  • risk-sensitive stochastic control

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
