Abstract
Existing neural-network-based autonomous systems have been shown to be vulnerable to adversarial attacks, so rigorous evaluation of their robustness is of great importance. However, evaluating robustness only under worst-case scenarios derived from known attacks is not comprehensive, not least because some of those attacks rarely occur in the real world. Moreover, the distribution of safety-critical data is usually multimodal, while most traditional attacks and evaluation methods focus on a single modality. To address these challenges, we propose a flow-based multimodal safety-critical scenario generator for evaluating decision-making algorithms. The generative model is optimized with weighted likelihood maximization, and a gradient-based sampling procedure is integrated to improve sampling efficiency. Safety-critical scenarios are generated by efficiently querying the task algorithms and a simulator. Experiments on a self-driving task demonstrate the advantages of our method in terms of testing efficiency and multimodal modeling capability. We evaluate six reinforcement learning algorithms on our generated traffic scenarios and draw empirical conclusions about their robustness.
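The abstract names two mechanisms: weighted likelihood maximization for training the flow-based generator, and a gradient-based sampling procedure for drawing safety-critical scenarios. The sketch below is a minimal illustration of how such pieces typically fit together; it is not the authors' released code. The `flow` object (with `log_prob`/`sample` methods), the differentiable `risk` surrogate, and all hyperparameters are assumptions introduced here for exposition.

```python
# Illustrative sketch only; `flow` and `risk` are hypothetical stand-ins,
# not the paper's actual implementation.
import torch

def weighted_nll(flow, x, weights):
    """Weighted likelihood maximization: scenarios with a higher
    safety-critical weight contribute more to the training objective."""
    log_px = flow.log_prob(x)            # per-scenario log-likelihood
    return -(weights * log_px).mean()    # weighted negative log-likelihood

def gradient_refined_samples(flow, n, risk, steps=10, lr=0.05):
    """Gradient-based sampling: draw candidate scenarios from the flow,
    then ascend a differentiable risk surrogate so that samples
    concentrate on safety-critical regions."""
    x = flow.sample(n).detach().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -risk(x).mean()           # maximize estimated risk
        loss.backward()
        opt.step()
    return x.detach()
```

In a loop of this shape, refined samples would be scored by querying the task algorithm in a simulator, and those scores would supply the `weights` for the next round of weighted likelihood training, which is one plausible reading of the query-efficient procedure the abstract describes.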
Original language | English (US) |
---|---|
Article number | 9355111 |
Pages (from-to) | 1551-1558 |
Number of pages | 8 |
Journal | IEEE Robotics and Automation Letters |
Volume | 6 |
Issue number | 2 |
DOIs | |
State | Published - Apr 2021 |
Keywords
- Robot safety
- Reinforcement learning
- Semantic scene understanding
ASJC Scopus subject areas
- Control and Systems Engineering
- Biomedical Engineering
- Human-Computer Interaction
- Mechanical Engineering
- Computer Vision and Pattern Recognition
- Computer Science Applications
- Control and Optimization
- Artificial Intelligence