Learning belief representations for imitation learning in POMDPs

Tanmay Gangwani, Joel Lehman, Qiang Liu, Jian Peng

Research output: Contribution to conference › Paper

Abstract

We consider the problem of imitation learning from expert demonstrations in partially observable Markov decision processes (POMDPs). Belief representations, which characterize the distribution over the latent states in a POMDP, have been modeled using recurrent neural networks and probabilistic latent variable models, and shown to be effective for reinforcement learning in POMDPs. In this work, we investigate the belief representation learning problem for generative adversarial imitation learning in POMDPs. Instead of training the belief module and the policy separately as suggested in prior work, we learn the belief module jointly with the policy, using a task-aware imitation loss to ensure that the representation is more aligned with the policy’s objective. To improve the robustness of the representation, we introduce several informative belief regularization techniques, including multi-step prediction of dynamics and action-sequences. Evaluated on various partially observable continuous-control locomotion tasks, our belief-module imitation learning approach (BMIL) substantially outperforms several baselines, including the original GAIL algorithm and the task-agnostic belief learning algorithm. Extensive ablation analysis indicates the effectiveness of task-aware belief learning and belief regularization. Code for the project is available online.
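To make the abstract's components concrete, below is a minimal PyTorch sketch of the core pieces it describes: a recurrent belief module, a GAIL discriminator defined over (belief, action) pairs, and informative-belief regularizers. All module names, shapes, and hyperparameters here are illustrative assumptions rather than the authors' released implementation, and one-step prediction heads stand in for the multi-step variants the paper uses.

```python
# Minimal sketch of the BMIL components (illustrative assumptions;
# see the authors' released code for the actual implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BeliefModule(nn.Module):
    """Recurrent belief encoder: compresses the action-observation
    history into a belief vector b_t, a learned stand-in for the
    posterior over latent states."""
    def __init__(self, obs_dim, act_dim, belief_dim):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim, belief_dim, batch_first=True)

    def forward(self, obs_seq, prev_act_seq):
        # obs_seq, prev_act_seq: (batch, T, dim) -> (batch, T, belief_dim)
        beliefs, _ = self.rnn(torch.cat([obs_seq, prev_act_seq], dim=-1))
        return beliefs


class Discriminator(nn.Module):
    """GAIL discriminator over (belief, action) pairs rather than
    (state, action), since the true state is latent in a POMDP."""
    def __init__(self, belief_dim, act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(belief_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))  # outputs a logit

    def forward(self, b, a):
        return self.net(torch.cat([b, a], dim=-1))


def gail_reward(disc, b, a):
    """Surrogate imitation reward r = -log(1 - D(b, a)), computed
    stably via logsigmoid; since the policy conditions on beliefs,
    optimizing this reward makes the belief loss task-aware."""
    return -F.logsigmoid(-disc(b, a)).detach()


def belief_regularizers(beliefs, acts, next_obs, dyn_head, act_head):
    """Informative-belief regularization (one-step version shown;
    the paper uses multi-step prediction of dynamics and
    action-sequences): the belief must predict forward dynamics
    and reconstruct the actions taken."""
    dyn_loss = F.mse_loss(dyn_head(torch.cat([beliefs, acts], dim=-1)), next_obs)
    act_loss = F.mse_loss(act_head(beliefs), acts)
    return dyn_loss + act_loss
```

In a training loop of this shape, the discriminator would be fit with a binary cross-entropy loss to separate expert from agent (belief, action) pairs, the policy updated on gail_reward with a policy-gradient method, and the belief module would receive gradients from both the imitation objective and the regularizers, which is what distinguishes joint, task-aware belief learning from a task-agnostic two-stage baseline.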

Original language: English (US)
State: Published - Jan 1 2019
Event: 35th Conference on Uncertainty in Artificial Intelligence, UAI 2019 - Tel Aviv, Israel
Duration: Jul 22 2019 - Jul 25 2019

Conference

Conference: 35th Conference on Uncertainty in Artificial Intelligence, UAI 2019
Country: Israel
City: Tel Aviv
Period: 7/22/19 - 7/25/19

Fingerprint

  • Recurrent neural networks
  • Reinforcement learning
  • Ablation
  • Learning algorithms
  • Demonstrations

ASJC Scopus subject areas

  • Artificial Intelligence

Cite this

Gangwani, T., Lehman, J., Liu, Q., & Peng, J. (2019). Learning belief representations for imitation learning in POMDPs. Paper presented at 35th Conference on Uncertainty in Artificial Intelligence, UAI 2019, Tel Aviv, Israel.
