Vision-based reinforcement learning for robot navigation

W. Zhu, S. Levinson

Research output: Contribution to conference › Paper

Abstract

In this paper, we present a novel vision-based learning approach for autonomous robot navigation. A hybrid state-mapping model, which combines the merits of both static and dynamic state-assignment strategies, is proposed to solve the problem of state organization in navigation-learning tasks. Specifically, the continuous feature space, which can be very large in general, is first mapped statically to a small conceptual state space for learning. Then, ambiguities among aliased states, i.e., cases where the same conceptual state is accidentally mapped to several physical states that require different action policies, are efficiently eliminated during learning with a recursive state-splitting process. The proposed method has been applied to navigation learning by a simulated robot, with very encouraging results.
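To make the idea concrete, the following is a minimal sketch (not the authors' implementation; all class names, thresholds, and the TD-error splitting trigger are illustrative assumptions) of a hybrid state mapper: a static coarse quantizer assigns continuous features to conceptual states, and cells whose temporal-difference errors stay persistently large, a common symptom of state aliasing, are recursively split into finer sub-states.

```python
class HybridStateMapper:
    """Illustrative sketch: static coarse quantization of a continuous
    feature space, plus recursive splitting of cells whose TD errors
    remain large (a symptom of state aliasing). Features are assumed
    to lie in [0, 1). Not the paper's actual algorithm."""

    def __init__(self, bins_per_dim=4, split_threshold=1.0):
        self.bins = bins_per_dim          # static quantization resolution
        self.split_threshold = split_threshold
        self.split_depth = {}             # cell -> number of times split
        self.err_ema = {}                 # cell -> running mean |TD error|

    def _cell(self, features):
        # Static mapping: quantize each feature into coarse bins.
        return tuple(min(int(f * self.bins), self.bins - 1) for f in features)

    def state_of(self, features):
        """Map continuous features to a discrete conceptual state,
        refined if this cell has been split."""
        cell = self._cell(features)
        depth = self.split_depth.get(cell, 0)
        if depth == 0:
            return cell
        # Refined state: quantize dimension 0 at 2**depth times the base
        # resolution inside this cell, separating aliased physical states.
        fine = int(features[0] * self.bins * (2 ** depth)) % (2 ** depth)
        return cell + (fine,)

    def report_td_error(self, features, td_error):
        """Splitting trigger: a cell whose smoothed |TD error| exceeds
        the threshold is split one level deeper, then its error resets."""
        cell = self._cell(features)
        ema = 0.9 * self.err_ema.get(cell, 0.0) + 0.1 * abs(td_error)
        self.err_ema[cell] = ema
        if ema > self.split_threshold:
            self.split_depth[cell] = self.split_depth.get(cell, 0) + 1
            self.err_ema[cell] = 0.0
```

In use, a learner would call `state_of` to index its value table and feed each TD error back through `report_td_error`; aliased cells then grow extra sub-states on their own, while well-behaved cells keep the small static state space.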

Original language: English (US)
Pages: 1025-1030
Number of pages: 6
State: Published - Jan 1 2001
Event: International Joint Conference on Neural Networks (IJCNN'01) - Washington, DC, United States
Duration: Jul 15 2001 – Jul 19 2001

Other

Other: International Joint Conference on Neural Networks (IJCNN'01)
Country: United States
City: Washington, DC
Period: 7/15/01 – 7/19/01

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence


Cite this

    Zhu, W., & Levinson, S. (2001). Vision-based reinforcement learning for robot navigation. 1025-1030. Paper presented at International Joint Conference on Neural Networks (IJCNN'01), Washington, DC, United States.