Abstract
In this paper, we present a novel vision-based learning approach for autonomous robot navigation. A hybrid state-mapping model, which combines the merits of static and dynamic state-assignment strategies, is proposed to solve the problem of state organization in navigation-learning tasks. Specifically, the continuous feature space, which can be very large in general, is first statically mapped to a small conceptual state space for learning. Ambiguities among aliasing states, i.e., conceptual states that are accidentally mapped to several physical states requiring different action policies, are then efficiently eliminated during learning through a recursive state-splitting process. The proposed method has been applied to a navigation-learning task with a simulated robot, with very encouraging results.
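The abstract describes the two-stage idea only at a high level. The sketch below is a minimal, hypothetical illustration of it: a static quantizer maps the continuous feature space to a few coarse conceptual states, and a recursive splitting step refines any state whose observations demand conflicting actions (perceptual aliasing). All class names, the rectangular partition, and the disagreement-based splitting criterion are assumptions for illustration, not the authors' exact method.

```python
# Hypothetical sketch of the hybrid state-mapping idea from the abstract.
# The splitting criterion and region representation are illustrative
# assumptions, not the paper's actual algorithm.
from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class ConceptualState:
    """A rectangular region of feature space treated as one learning state."""
    low: np.ndarray                                       # lower corner of region
    high: np.ndarray                                      # upper corner of region
    samples: List[tuple] = field(default_factory=list)    # (features, best_action)

    def contains(self, x: np.ndarray) -> bool:
        return bool(np.all(x >= self.low) and np.all(x <= self.high))

    def is_aliased(self) -> bool:
        # Aliasing: samples falling in this region disagree on the best action.
        return len({a for _, a in self.samples}) > 1

    def split(self) -> List["ConceptualState"]:
        # Split along the widest feature dimension at its midpoint.
        dim = int(np.argmax(self.high - self.low))
        mid = (self.low[dim] + self.high[dim]) / 2.0
        left_high, right_low = self.high.copy(), self.low.copy()
        left_high[dim], right_low[dim] = mid, mid
        left = ConceptualState(self.low, left_high)
        right = ConceptualState(right_low, self.high)
        for x, a in self.samples:
            (left if left.contains(x) else right).samples.append((x, a))
        return [left, right]


def refine(states: List[ConceptualState]) -> List[ConceptualState]:
    """One pass of the recursive state-splitting process."""
    refined: List[ConceptualState] = []
    for s in states:
        refined.extend(s.split() if s.is_aliased() else [s])
    return refined


if __name__ == "__main__":
    # Static stage: one coarse conceptual state covers the whole feature space.
    states = [ConceptualState(np.array([0.0, 0.0]), np.array([1.0, 1.0]))]
    # Two observations that need different actions land in the same state...
    states[0].samples = [(np.array([0.1, 0.1]), "turn_left"),
                         (np.array([0.9, 0.9]), "turn_right")]
    # ...so the dynamic stage splits recursively until the aliasing disappears.
    while any(s.is_aliased() for s in states):
        states = refine(states)
    print(f"{len(states)} conceptual states after splitting")
```

Under these assumptions, the static stage keeps the state space small, and states are only refined where the learned policy actually needs finer resolution.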
Original language | English (US) |
---|---|
Pages | 1025-1030 |
Number of pages | 6 |
State | Published - 2001 |
Event | International Joint Conference on Neural Networks (IJCNN'01) - Washington, DC, United States; Jul 15 2001 → Jul 19 2001 |
Other
Other | International Joint Conference on Neural Networks (IJCNN'01) |
---|---|
Country/Territory | United States |
City | Washington, DC |
Period | 7/15/01 → 7/19/01 |
ASJC Scopus subject areas
- Software
- Artificial Intelligence