TY - CONF
T1 - Explaining RL Decisions with Trajectories
AU - Deshmukh, Shripad
AU - Dasgupta, Arpan
AU - Krishnamurthy, Balaji
AU - Jiang, Nan
AU - Agarwal, Chirag
AU - Theocharous, Georgios
AU - Subramanian, Jayakumar
N1 - We thank anonymous reviewers for their helpful feedback to make this work better. Moreover, NJ acknowledges funding support from NSF IIS-2112471 and NSF CAREER IIS-2141781. Finally, we wish to dedicate this work to the memory of our dear colleague Georgios Theocharous who is not with us anymore. While his premature demise has left an unfillable void, his work has made an indelible mark in the domain of reinforcement learning and in the lives of many researchers. He will forever remain in our memories.
PY - 2023
Y1 - 2023
AB - Explanation is a key component for the adoption of reinforcement learning (RL) in many real-world decision-making problems. In the literature, the explanation is often provided by saliency attribution to the features of the RL agent's state. In this work, we propose a complementary approach to these explanations, particularly for offline RL, where we attribute the policy decisions of a trained RL agent to the trajectories encountered by it during training. To do so, we encode trajectories in offline training data individually as well as collectively (encoding a set of trajectories). We then attribute policy decisions to a set of trajectories in this encoded space by estimating the sensitivity of the decision with respect to that set. Further, we demonstrate the effectiveness of the proposed approach in terms of quality of attributions as well as practical scalability in diverse environments that involve both discrete and continuous state and action spaces such as grid-worlds, video games (Atari) and continuous control (MuJoCo). We also conduct a human study on a simple navigation task to observe how participants' understanding of the task compares with the data attributed for a trained RL policy.
UR - http://www.scopus.com/inward/record.url?scp=85197649529&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85197649529&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85197649529
T2 - 11th International Conference on Learning Representations, ICLR 2023
Y2 - 1 May 2023 through 5 May 2023
ER -