Causal explanation for reinforcement learning: quantifying state and temporal importance

Xiaoxiao Wang, Fanyu Meng, Xin Liu, Zhaodan Kong, Xin Chen

Research output: Contribution to journal › Article › peer-review

Abstract

Explainability plays an increasingly important role in machine learning. Because reinforcement learning (RL) involves interactions between states and actions over time, explaining an RL policy is more challenging than explaining a supervised learning model. Furthermore, humans view the world through a causal lens and thus prefer causal explanations over associational ones. Therefore, in this paper, we develop a causal explanation mechanism that quantifies the causal importance of states on actions and how this importance evolves over time. We also demonstrate the advantages of our mechanism over state-of-the-art associational methods for RL policy explanation through a series of simulation studies, including crop irrigation, Blackjack, collision avoidance, and lunar lander.

Original language: English (US)
Pages (from-to): 22546-22564
Number of pages: 19
Journal: Applied Intelligence
Volume: 53
Issue number: 19
DOIs
State: Published - Oct 2023
Externally published: Yes

Keywords

  • Causal
  • Explainability
  • Reinforcement learning
  • Temporal importance

ASJC Scopus subject areas

  • Artificial Intelligence
