Vision-based reinforcement learning for robot navigation

W. Zhu, S. Levinson

Research output: Contribution to conference › Paper

Abstract

In this paper, we present a novel vision-based learning approach for autonomous robot navigation. A hybrid state-mapping model, which combines the merits of both static and dynamic state-assignment strategies, is proposed to solve the problem of state organization in navigation-learning tasks. Specifically, the continuous feature space, which can in general be very large, is first mapped statically to a small conceptual state space for learning. Then, ambiguities among aliasing states (cases in which the same conceptual state is accidentally mapped to several physical states that require different action policies) are efficiently eliminated during learning through a recursive state-splitting process. The proposed method has been applied to navigation learning with a simulated robot, with very encouraging results.
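As a rough illustration of the two-stage idea in the abstract, the sketch below pairs a static discretization of a continuous feature with a recursive split operation and a simple aliasing trigger. All class names, thresholds, and the aliasing heuristic here are illustrative assumptions, not the authors' implementation.

```python
class HybridStateMapper:
    """Illustrative sketch (not the paper's implementation) of a hybrid
    state mapping: a coarse static discretization of one continuous
    feature, plus recursive splitting of cells that turn out to alias
    several distinct physical situations."""

    def __init__(self, low, high, n_bins=4):
        # Static stage: partition [low, high) into equal-width cells.
        step = (high - low) / n_bins
        self.boundaries = [low + i * step for i in range(1, n_bins)]

    def state(self, feature):
        # Map a continuous feature value to the index of its cell.
        return sum(1 for b in self.boundaries if feature >= b)

    def split(self, feature):
        # Dynamic stage: insert a new boundary at an ambiguous sample,
        # splitting the aliased cell into two finer conceptual states.
        self.boundaries.append(feature)
        self.boundaries.sort()


def looks_aliased(td_errors, window=20, tol=0.1):
    # Heuristic trigger (an assumption here): if temporal-difference
    # errors for a state stay large over a window, the state may be
    # aliasing situations that need different actions, so split it.
    recent = td_errors[-window:]
    return len(recent) == window and sum(abs(e) for e in recent) / window > tol
```

In this toy version, a learner would call `split` on any state whose TD errors satisfy `looks_aliased`, refining the conceptual state space only where the static mapping proved ambiguous.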

Original language: English (US)
Pages: 1025-1030
Number of pages: 6
State: Published - Jan 1 2001
Event: International Joint Conference on Neural Networks (IJCNN'01) - Washington, DC, United States
Duration: Jul 15 2001 - Jul 19 2001

Other

Other: International Joint Conference on Neural Networks (IJCNN'01)
Country: United States
City: Washington, DC
Period: 7/15/01 - 7/19/01

Fingerprint

Reinforcement learning
Navigation
Robots

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

Cite this

Zhu, W., & Levinson, S. (2001). Vision-based reinforcement learning for robot navigation. 1025-1030. Paper presented at International Joint Conference on Neural Networks (IJCNN'01), Washington, DC, United States.

Zhu, W., & Levinson, S. (2001). Vision-based reinforcement learning for robot navigation. pp. 1025-1030. Paper presented at International Joint Conference on Neural Networks (IJCNN'01), Washington, DC, United States.

Zhu, W & Levinson, S 2001, 'Vision-based reinforcement learning for robot navigation', paper presented at the International Joint Conference on Neural Networks (IJCNN'01), Washington, DC, United States, 7/15/01 - 7/19/01, pp. 1025-1030.

Zhu W, Levinson S. Vision-based reinforcement learning for robot navigation. Paper presented at: International Joint Conference on Neural Networks (IJCNN'01); 2001; Washington, DC, United States. p. 1025-1030.

Zhu, W.; Levinson, S. Vision-based reinforcement learning for robot navigation. Paper presented at International Joint Conference on Neural Networks (IJCNN'01), Washington, DC, United States. 6 p.
@conference{f7a12bbf7e2c47f7877bee10134eac74,
title = "Vision-based reinforcement learning for robot navigation",
abstract = "In this paper, we present a novel vision-based learning approach for autonomous robot navigation. A hybrid state-mapping model, which combines the merits of both static and dynamic state-assignment strategies, is proposed to solve the problem of state organization in navigation-learning tasks. Specifically, the continuous feature space, which can in general be very large, is first mapped statically to a small conceptual state space for learning. Then, ambiguities among aliasing states (cases in which the same conceptual state is accidentally mapped to several physical states that require different action policies) are efficiently eliminated during learning through a recursive state-splitting process. The proposed method has been applied to navigation learning with a simulated robot, with very encouraging results.",
author = "W. Zhu and S. Levinson",
year = "2001",
month = "1",
day = "1",
language = "English (US)",
pages = "1025--1030",
note = "International Joint Conference on Neural Networks (IJCNN'01) ; Conference date: 15-07-2001 Through 19-07-2001",

}

TY - CONF

T1 - Vision-based reinforcement learning for robot navigation

AU - Zhu, W.

AU - Levinson, S.

PY - 2001/1/1

Y1 - 2001/1/1

N2 - In this paper, we present a novel vision-based learning approach for autonomous robot navigation. A hybrid state-mapping model, which combines the merits of both static and dynamic state-assignment strategies, is proposed to solve the problem of state organization in navigation-learning tasks. Specifically, the continuous feature space, which can in general be very large, is first mapped statically to a small conceptual state space for learning. Then, ambiguities among aliasing states (cases in which the same conceptual state is accidentally mapped to several physical states that require different action policies) are efficiently eliminated during learning through a recursive state-splitting process. The proposed method has been applied to navigation learning with a simulated robot, with very encouraging results.

AB - In this paper, we present a novel vision-based learning approach for autonomous robot navigation. A hybrid state-mapping model, which combines the merits of both static and dynamic state-assignment strategies, is proposed to solve the problem of state organization in navigation-learning tasks. Specifically, the continuous feature space, which can in general be very large, is first mapped statically to a small conceptual state space for learning. Then, ambiguities among aliasing states (cases in which the same conceptual state is accidentally mapped to several physical states that require different action policies) are efficiently eliminated during learning through a recursive state-splitting process. The proposed method has been applied to navigation learning with a simulated robot, with very encouraging results.

UR - http://www.scopus.com/inward/record.url?scp=0034870014&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0034870014&partnerID=8YFLogxK

M3 - Paper

AN - SCOPUS:0034870014

SP - 1025

EP - 1030

ER -