TY - GEN
T1 - Near-Optimal Model-Free Reinforcement Learning in Non-Stationary Episodic MDPs
AU - Mao, Weichao
AU - Zhang, Kaiqing
AU - Zhu, Ruihao
AU - Simchi-Levi, David
AU - Başar, Tamer
N1 - Publisher Copyright:
Copyright © 2021 by the author(s)
PY - 2021
Y1 - 2021
N2 - We consider model-free reinforcement learning (RL) in non-stationary Markov decision processes. Both the reward functions and the state transition functions are allowed to vary arbitrarily over time as long as their cumulative variations do not exceed certain variation budgets. We propose Restarted Q-Learning with Upper Confidence Bounds (RestartQ-UCB), the first model-free algorithm for non-stationary RL, and show that it outperforms existing solutions in terms of dynamic regret. Specifically, RestartQ-UCB with Freedman-type bonus terms achieves a dynamic regret bound of Õ(S^(1/3) A^(1/3) ∆^(1/3) H T^(2/3)), where S and A are the numbers of states and actions, respectively, ∆ > 0 is the variation budget, H is the number of time steps per episode, and T is the total number of time steps. We further show that our algorithm is nearly optimal by establishing an information-theoretical lower bound of Ω(S^(1/3) A^(1/3) ∆^(1/3) H^(2/3) T^(2/3)), the first lower bound in non-stationary RL. Numerical experiments validate the advantages of RestartQ-UCB in terms of both cumulative rewards and computational efficiency. We further demonstrate the power of our results in the context of multi-agent RL, where non-stationarity is a key challenge.
AB - We consider model-free reinforcement learning (RL) in non-stationary Markov decision processes. Both the reward functions and the state transition functions are allowed to vary arbitrarily over time as long as their cumulative variations do not exceed certain variation budgets. We propose Restarted Q-Learning with Upper Confidence Bounds (RestartQ-UCB), the first model-free algorithm for non-stationary RL, and show that it outperforms existing solutions in terms of dynamic regret. Specifically, RestartQ-UCB with Freedman-type bonus terms achieves a dynamic regret bound of Õ(S^(1/3) A^(1/3) ∆^(1/3) H T^(2/3)), where S and A are the numbers of states and actions, respectively, ∆ > 0 is the variation budget, H is the number of time steps per episode, and T is the total number of time steps. We further show that our algorithm is nearly optimal by establishing an information-theoretical lower bound of Ω(S^(1/3) A^(1/3) ∆^(1/3) H^(2/3) T^(2/3)), the first lower bound in non-stationary RL. Numerical experiments validate the advantages of RestartQ-UCB in terms of both cumulative rewards and computational efficiency. We further demonstrate the power of our results in the context of multi-agent RL, where non-stationarity is a key challenge.
UR - http://www.scopus.com/inward/record.url?scp=85161337063&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85161337063&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85161337063
T3 - Proceedings of Machine Learning Research
SP - 7447
EP - 7458
BT - Proceedings of the 38th International Conference on Machine Learning, ICML 2021
PB - ML Research Press
T2 - 38th International Conference on Machine Learning, ICML 2021
Y2 - 18 July 2021 through 24 July 2021
ER -
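
The abstract above describes RestartQ-UCB only at a high level. Below is a minimal, hypothetical Python sketch of the general idea it names (optimistic tabular Q-learning that periodically restarts so stale data from a drifting MDP is discarded). It is not the authors' exact algorithm: it substitutes a simple Hoeffding-style bonus for the paper's Freedman-type bonus, uses a fixed restart schedule, and assumes an environment interface env.reset()/env.step() that returns integer state indices.

# Minimal sketch of restarted optimistic Q-learning for an episodic tabular MDP.
# Illustrative approximation only, NOT the authors' exact RestartQ-UCB:
# Hoeffding-style bonus, fixed restart schedule, assumed env interface.
import numpy as np

def restarted_optimistic_q_learning(env, S, A, H, num_episodes, restart_every, c=1.0):
    """Optimistic Q-learning that wipes its estimates every `restart_every` episodes."""
    Q = np.full((H, S, A), float(H))      # optimistic initialization at the max return H
    N = np.zeros((H, S, A), dtype=int)    # visit counts per (step, state, action)
    total_reward = 0.0
    for k in range(num_episodes):
        if k % restart_every == 0:        # restart: forget estimates gathered under old dynamics
            Q.fill(float(H))
            N.fill(0)
        s = env.reset()
        for h in range(H):
            a = int(np.argmax(Q[h, s]))   # act greedily w.r.t. the optimistic Q-values
            s_next, r, done = env.step(a)
            total_reward += r
            N[h, s, a] += 1
            t = N[h, s, a]
            alpha = (H + 1) / (H + t)     # standard step size for episodic Q-learning analyses
            bonus = c * np.sqrt(H**3 * np.log(S * A * H * num_episodes) / t)
            v_next = 0.0 if (h == H - 1 or done) else float(np.max(Q[h + 1, s_next]))
            target = r + v_next + bonus
            Q[h, s, a] = (1 - alpha) * Q[h, s, a] + alpha * min(target, float(H))
            if done:
                break
            s = s_next
    return total_reward

Restarting bounds how long the learner keeps relying on data generated under outdated rewards and transitions, which is the mechanism behind the ∆-dependent dynamic regret discussed in the abstract.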