Abstract
Shared control can assist a human teleoperator in performing tasks on a remote robot, but it also adds complexity to the user interface, which must let the user select the mode of assistance. This letter presents an expert action recommender framework that learns which actions are helpful for accomplishing a task and generates a minimal set of recommendations for display in the user interface. We address the learning problem in an open-world context where the action choice depends on an unknown number of objects, i.e., the output domain of the prediction problem changes dynamically. Using structured prediction, we simultaneously learn which actions to suggest and which objects those actions should act on. In experiments on three tasks in cluttered table-top environments, this method places the correct suggestion within its top-5 predictions over 90% of the time, and it also generalizes well to novel tasks with limited training data.
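The structured-prediction idea in the abstract can be sketched as scoring every candidate (action, object) pair in the current scene and ranking them, so the candidate set grows and shrinks with the number of detected objects. The sketch below is a minimal illustration under assumed names (`featurize`, `recommend`, the toy action list, and the linear scoring model are all hypothetical, not the letter's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical action vocabulary for illustration only.
ACTIONS = ["grasp", "push", "open_gripper"]

def featurize(action, obj):
    """Toy joint feature map over an (action, object) pair:
    an action one-hot concatenated with the object's features."""
    a = np.eye(len(ACTIONS))[ACTIONS.index(action)]
    return np.concatenate([a, obj["features"]])

def recommend(w, objects, k=5):
    """Rank every (action, object) pair by a linear score w . phi(a, o)
    and return the top k. Because candidates are enumerated per scene,
    the output domain changes with the number of objects present."""
    candidates = [(a, o) for a in ACTIONS for o in objects]
    scored = sorted(candidates,
                    key=lambda c: float(w @ featurize(*c)),
                    reverse=True)
    return scored[:k]

# A cluttered scene with an arbitrary number of detected objects.
objects = [{"id": i, "features": rng.normal(size=4)} for i in range(6)]
w = rng.normal(size=len(ACTIONS) + 4)   # stand-in for learned weights

top5 = recommend(w, objects, k=5)
```

In a learned system the weights would come from demonstrations rather than random values, but the enumerate-score-rank loop is what lets one model cover a dynamically changing set of objects.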
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 3099-3105 |
| Number of pages | 7 |
| Journal | IEEE Robotics and Automation Letters |
| Volume | 7 |
| Issue number | 2 |
| State | Published - Apr 1 2022 |
| Externally published | Yes |
Keywords
- Ear
- End effectors
- Grippers
- History
- Intention Recognition
- Learning from Demonstration
- Robots
- Standards
- Task analysis
- Telerobotics and Teleoperation
ASJC Scopus subject areas
- Mechanical Engineering
- Control and Optimization
- Artificial Intelligence
- Human-Computer Interaction
- Control and Systems Engineering
- Computer Vision and Pattern Recognition
- Biomedical Engineering
- Computer Science Applications