Learning Navigation Subroutines from Egocentric Videos

Ashish Kumar, Saurabh Gupta, Jitendra Malik

Research output: Contribution to journal › Conference article › peer-review


Planning at a higher level of abstraction, instead of over low-level torques, improves sample efficiency in reinforcement learning and computational efficiency in classical planning. We propose a method to learn such hierarchical abstractions, or subroutines, from egocentric video data of experts performing tasks. We learn a self-supervised inverse model on small amounts of random interaction data to pseudo-label the expert egocentric videos with agent actions. Visuomotor subroutines are acquired from these pseudo-labeled videos by learning a latent intent-conditioned policy that predicts the inferred pseudo-actions from the corresponding image observations. We demonstrate our proposed approach in the context of navigation, and show that we can successfully learn consistent and diverse visuomotor subroutines from passive egocentric videos. We demonstrate the utility of our acquired visuomotor subroutines by using them as-is for exploration, and as sub-policies in a hierarchical RL framework for reaching point goals and semantic goals. We also demonstrate the behavior of our subroutines in the real world by deploying them on a real robotic platform. Project website: https://ashishkumar1993.github.io/subroutines/.
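The pseudo-labeling step in the abstract can be illustrated with a minimal toy sketch. In the paper the inverse model is a learned, self-supervised neural network applied to image pairs; here a hand-coded stand-in for a 1-D agent plays that role, and all function names and the toy "world" are illustrative assumptions, not the authors' actual models.

```python
import numpy as np

def inverse_model(obs_t, obs_t1):
    """Infer the action that took the agent from obs_t to obs_t1.

    In the paper this is a self-supervised model trained on small amounts
    of random interaction data; here it is a hand-coded stand-in for a
    1-D agent whose observations are positions and whose actions are
    {-1 (left), 0 (stay), +1 (right)}.
    """
    return int(np.sign(obs_t1 - obs_t))

def pseudo_label(frames):
    """Pseudo-label an expert egocentric video (a sequence of
    observations) with agent actions, one per consecutive frame pair.
    These (observation, pseudo-action) pairs are what the latent
    intent-conditioned policy would then be trained on."""
    return [inverse_model(frames[t], frames[t + 1])
            for t in range(len(frames) - 1)]

# A toy expert trajectory: walk right twice, pause, walk left.
expert_frames = [0, 1, 2, 2, 1]
actions = pseudo_label(expert_frames)  # → [1, 1, 0, -1]
```

In the full method, these pseudo-actions supervise a policy conditioned on a latent intent variable, so that distinct latents come to encode distinct, consistent subroutines (e.g. "follow corridor" or "turn at junction").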

Original language: English (US)
Pages (from-to): 617-626
Number of pages: 10
Journal: Proceedings of Machine Learning Research
State: Published - 2019
Externally published: Yes
Event: 3rd Conference on Robot Learning, CoRL 2019 - Osaka, Japan
Duration: Oct 30 2019 - Nov 1 2019


Keywords

  • Hierarchical Reinforcement Learning
  • Passive Data
  • Subroutines

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

