Abstract
Sample efficiency and scalability to a large number of agents are two important goals for multi-agent reinforcement learning systems. Recent work has brought us closer to these goals by addressing non-stationarity of the environment from a single agent’s perspective, using a deep-net critic that depends on all agents’ observations and actions. The critic input concatenates agent observations and actions in a user-specified order. However, since deep nets are not permutation invariant, a permuted input changes the critic output even though the underlying environment state is identical. To avoid this inefficiency, we propose a ‘permutation invariant critic’ (PIC), which yields identical output irrespective of the agent permutation. This consistent representation enables our model to scale to 30 times more agents and to achieve improvements in test episode reward of between 15% and 50% on the challenging multi-agent particle environment (MPE).
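The invariance property described in the abstract can be illustrated with a small sketch. The block below is not the paper’s implementation (the PIC itself is built from a graph neural network over the agents, per the keywords); it is a minimal PyTorch illustration under assumed sizes `N`, `OBS`, and `ACT`, contrasting a conventional concatenation critic, whose output changes when agents are re-ordered, with a shared per-agent encoder followed by symmetric mean pooling, whose output does not.

```python
# Minimal sketch (not the authors' released code): why a concatenation critic is
# sensitive to agent ordering, and how a symmetric aggregation removes that sensitivity.
# Assumptions: PyTorch; N agents, observation dim OBS, action dim ACT are hypothetical.
import torch
import torch.nn as nn

N, OBS, ACT = 6, 8, 2   # hypothetical sizes for illustration
torch.manual_seed(0)

class ConcatCritic(nn.Module):
    """Critic that concatenates all (obs, act) pairs in a fixed, user-specified order."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N * (OBS + ACT), 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, obs, act):  # obs: (B, N, OBS), act: (B, N, ACT)
        x = torch.cat([obs, act], dim=-1).flatten(1)
        return self.net(x)

class PermutationInvariantCritic(nn.Module):
    """Shared per-agent encoder followed by mean pooling over agents,
    so any reordering of the agents yields the same Q-value."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(OBS + ACT, 128), nn.ReLU())
        self.head = nn.Linear(128, 1)

    def forward(self, obs, act):
        h = self.encoder(torch.cat([obs, act], dim=-1))  # (B, N, 128), per-agent features
        return self.head(h.mean(dim=1))                  # symmetric pooling over agents

obs, act = torch.randn(1, N, OBS), torch.randn(1, N, ACT)
perm = torch.randperm(N)  # same environment state, agents re-indexed
concat_critic, pic = ConcatCritic(), PermutationInvariantCritic()
print(concat_critic(obs, act) - concat_critic(obs[:, perm], act[:, perm]))  # generally nonzero
print(pic(obs, act) - pic(obs[:, perm], act[:, perm]))                      # exactly zero
```

The invariance comes entirely from the symmetric aggregation step; mean pooling is used here only for brevity, and any symmetric aggregator (e.g. max pooling over graph-convolution node features, as a GNN-based critic would use) gives the same ordering-independence.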
Original language | English (US) |
---|---|
Pages (from-to) | 590-602 |
Number of pages | 13 |
Journal | Proceedings of Machine Learning Research |
Volume | 100 |
State | Published - 2019 |
Event | 3rd Conference on Robot Learning, CoRL 2019 - Osaka, Japan; Duration: Oct 30 2019 → Nov 1 2019 |
Keywords
- Graph Neural Network
- Multi-agent Reinforcement Learning
- Permutation Invariance
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability