Abstract
Nash Q-learning is one of the earliest and best-known algorithms in multi-agent reinforcement learning (MARL) for learning policies that constitute a Nash equilibrium of an underlying general-sum Markov game. Its original analysis provided asymptotic guarantees in the tabular case. Recently, finite-sample guarantees have been obtained for the tabular case using more modern RL techniques. Our work analyzes Nash Q-learning with linear function approximation, a representation regime used when the state space is large or continuous, and provides finite-sample guarantees that establish its sample efficiency. We find that the obtained performance nearly matches an existing efficient result for single-agent RL under the same representation, with a polynomial gap relative to the best-known result for the tabular case.
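For intuition, the tabular Nash Q-learning update (the setting of the original analysis referenced above) can be sketched as follows. This is an illustrative sketch, not the paper's algorithm: for simplicity it restricts to two agents and computes the stage-game equilibrium by brute-force search over pure joint actions, assuming a pure-strategy equilibrium exists (general-sum games may require mixed equilibria). The function names `pure_nash_value` and `nash_q_update` are hypothetical.

```python
import numpy as np

def pure_nash_value(Q1, Q2):
    """Payoffs (v1, v2) at a pure-strategy Nash equilibrium of the
    bimatrix stage game (Q1, Q2), found by brute force.
    Illustrative assumption: at least one pure equilibrium exists."""
    n, m = Q1.shape
    for a1 in range(n):
        for a2 in range(m):
            # (a1, a2) is a pure Nash equilibrium if neither agent
            # can improve by deviating unilaterally.
            if Q1[a1, a2] >= Q1[:, a2].max() and Q2[a1, a2] >= Q2[a1, :].max():
                return Q1[a1, a2], Q2[a1, a2]
    raise ValueError("no pure-strategy equilibrium in this stage game")

def nash_q_update(Q1, Q2, s, a1, a2, r1, r2, s_next, alpha=0.1, gamma=0.9):
    """One tabular Nash Q-learning step for two agents.

    Q1, Q2: arrays of shape (n_states, n_actions1, n_actions2).
    Each agent bootstraps on the Nash value of the next-state stage game
    rather than on its own max, as in single-agent Q-learning."""
    v1, v2 = pure_nash_value(Q1[s_next], Q2[s_next])
    Q1[s, a1, a2] += alpha * (r1 + gamma * v1 - Q1[s, a1, a2])
    Q2[s, a1, a2] += alpha * (r2 + gamma * v2 - Q2[s, a1, a2])
```

The key difference from single-agent Q-learning is the bootstrap target: the max over own actions is replaced by the value of a Nash equilibrium of the stage game induced by the current Q-estimates at the next state. The linear function approximation analyzed in the paper replaces the tables `Q1`, `Q2` with linear functions of state-action features.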
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 424-432 |
| Number of pages | 9 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 216 |
| State | Published - 2023 |
| Externally published | Yes |
| Event | 39th Conference on Uncertainty in Artificial Intelligence, UAI 2023 - Pittsburgh, United States |
| Event duration | Jul 31, 2023 → Aug 4, 2023 |
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability