Abstract
Planning at a higher level of abstraction, instead of over low-level torques, improves sample efficiency in reinforcement learning and computational efficiency in classical planning. We propose a method to learn such hierarchical abstractions, or subroutines, from egocentric video data of experts performing tasks. We learn a self-supervised inverse model on small amounts of random interaction data to pseudo-label the expert egocentric videos with agent actions. Visuomotor subroutines are acquired from these pseudo-labeled videos by learning a latent intent-conditioned policy that predicts the inferred pseudo-actions from the corresponding image observations. We demonstrate our proposed approach in the context of navigation, and show that we can successfully learn consistent and diverse visuomotor subroutines from passive egocentric videos. We demonstrate the utility of the acquired visuomotor subroutines by using them as-is for exploration, and as sub-policies in a hierarchical RL framework for reaching point goals and semantic goals. We also demonstrate the behavior of our subroutines in the real world by deploying them on a real robotic platform. Project website: https://ashishkumar1993.github.io/subroutines/.
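The abstract describes a two-stage pipeline: an inverse model trained on random interaction data pseudo-labels expert video frames with actions, and an intent-conditioned policy is then trained to predict those pseudo-actions from images. Below is a minimal sketch of that pipeline in PyTorch, assuming discrete actions, a discrete latent intent code, and placeholder network sizes and inputs; the paper's actual architectures and intent-inference procedure may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_encoder(feat_dim=128):
    # Small convolutional image encoder (hypothetical sizes, not the paper's).
    return nn.Sequential(
        nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
        nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
        nn.Flatten(),
        nn.LazyLinear(feat_dim), nn.ReLU(),
    )


class InverseModel(nn.Module):
    """Predicts the action taken between two consecutive observations.
    Trained self-supervised on random interaction data."""
    def __init__(self, feat_dim=128, num_actions=4):
        super().__init__()
        self.encoder = make_encoder(feat_dim)
        self.head = nn.Linear(2 * feat_dim, num_actions)

    def forward(self, obs_t, obs_tp1):
        f = torch.cat([self.encoder(obs_t), self.encoder(obs_tp1)], dim=-1)
        return self.head(f)  # logits over discrete actions


class IntentConditionedPolicy(nn.Module):
    """Maps an observation and a latent intent code to an action;
    each intent value corresponds to one learned subroutine."""
    def __init__(self, feat_dim=128, num_intents=4, num_actions=4):
        super().__init__()
        self.encoder = make_encoder(feat_dim)
        self.intent_embed = nn.Embedding(num_intents, feat_dim)
        self.head = nn.Linear(2 * feat_dim, num_actions)

    def forward(self, obs, intent):
        f = torch.cat([self.encoder(obs), self.intent_embed(intent)], dim=-1)
        return self.head(f)


# Stage 1 (assumed already done): train inverse_model on random interaction data.
inverse_model = InverseModel()
policy = IntentConditionedPolicy()

# Stage 2: pseudo-label consecutive expert video frames, then distill them
# into the intent-conditioned policy. Random tensors stand in for real frames.
frames = torch.randn(8, 3, 64, 64)
with torch.no_grad():
    pseudo_actions = inverse_model(frames[:-1], frames[1:]).argmax(dim=-1)
intents = torch.zeros(len(pseudo_actions), dtype=torch.long)  # placeholder intent assignment
loss = F.cross_entropy(policy(frames[:-1], intents), pseudo_actions)
loss.backward()
```

In this sketch the intent is assigned as a fixed placeholder per clip; in practice the latent intent is what groups the pseudo-labeled behavior into distinct, reusable subroutines.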
| Original language | English (US) |
|---|---|
| Pages (from-to) | 617-626 |
| Number of pages | 10 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 100 |
| State | Published - 2019 |
| Externally published | Yes |
| Event | 3rd Conference on Robot Learning (CoRL 2019), Osaka, Japan, Oct 30 - Nov 1, 2019 |
Keywords
- Hierarchical Reinforcement Learning
- Passive Data
- Subroutines
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability