Abstract
In this paper we present results from a study of the performance of humans and automatic controllers on a general remote navigation task. The remote navigation task is defined as driving a vehicle with nonholonomic kinematic constraints around obstacles toward a goal. We conducted experiments with humans and automatic controllers in which the number and type of obstacles as well as the feedback delay were varied. Humans showed significantly more robust performance than a receding horizon controller. Using the human data, we then train a new human-like receding horizon controller that provides goal convergence when there is no uncertainty. We show that paths produced by the trained human-like controller are similar to human paths and that the trained controller improves robustness compared to the original receding horizon controller.
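The baseline described in the abstract can be pictured as a standard receding-horizon loop over a unicycle-type kinematic model: at each step, optimize a short control sequence against a goal-attraction plus obstacle-penalty cost, apply only the first control, and repeat from the new state. The sketch below illustrates this pattern in Python; the kinematic model, cost weights, horizon length, and obstacle layout are assumptions chosen for illustration, not the controller or cost function used in the paper.

```python
# Minimal receding-horizon (MPC) sketch for a unicycle-type vehicle, assuming a
# discrete-time model, a quadratic goal cost, and a soft obstacle penalty.
# Illustrative only; the paper's actual controller and cost are not reproduced here.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.1, 10                # time step [s] and lookahead steps (assumed)
GOAL = np.array([5.0, 5.0])          # hypothetical goal position
OBSTACLES = [np.array([2.5, 2.5])]   # hypothetical point obstacles
SAFE_DIST = 0.5                      # assumed clearance scale

def rollout(state, controls):
    """Integrate unicycle kinematics: x' = v cos(th), y' = v sin(th), th' = w."""
    x, y, th = state
    traj = []
    for v, w in controls.reshape(-1, 2):
        x += DT * v * np.cos(th)
        y += DT * v * np.sin(th)
        th += DT * w
        traj.append((x, y))
    return np.array(traj)

def cost(controls, state):
    traj = rollout(state, controls)
    goal_cost = np.sum((traj - GOAL) ** 2)                 # attract toward the goal
    obs_cost = sum(np.sum(np.exp(-((traj - ob) ** 2).sum(axis=1) / SAFE_DIST ** 2))
                   for ob in OBSTACLES)                    # soft penalty near obstacles
    effort = 0.01 * np.sum(controls ** 2)                  # small control-effort term
    return goal_cost + 50.0 * obs_cost + effort

def receding_horizon_step(state):
    """Optimize over the horizon, apply only the first control (standard MPC pattern)."""
    u0 = np.zeros(2 * HORIZON)
    res = minimize(cost, u0, args=(state,), method="L-BFGS-B",
                   bounds=[(-1.0, 1.0)] * (2 * HORIZON))
    return res.x[:2]  # (v, w) to apply at this step

if __name__ == "__main__":
    state = np.array([0.0, 0.0, 0.0])  # x, y, heading
    v, w = receding_horizon_step(state)
    print("first control:", v, w)
```

In this pattern, the trained human-like variant reported in the paper would differ in how the cost (or the resulting control choice) is shaped by the recorded human trajectories, while the replanning loop itself stays the same.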
Original language | English (US)
---|---
Pages (from-to) | 44-63
Number of pages | 20
Journal | Paladyn
Volume | 2
Issue number | 1
DOIs |
State | Published - Mar 1 2011
Keywords
- automatic obstacle avoidance
- human automata interactions
- human obstacle avoidance
- learning human behavior
- receding horizon control
- remote navigation
- time delay
ASJC Scopus subject areas
- Human-Computer Interaction
- Developmental Neuroscience
- Cognitive Neuroscience
- Artificial Intelligence
- Behavioral Neuroscience