TY - GEN
T1 - Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning
T2 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
AU - Xie, Tengyang
AU - Jiang, Nan
AU - Wang, Huan
AU - Xiong, Caiming
AU - Bai, Yu
N1 - Publisher Copyright:
© 2021 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2021
Y1 - 2021
N2 - Recent theoretical work studies sample-efficient reinforcement learning (RL) extensively in two settings: learning interactively in the environment (online RL), or learning from an offline dataset (offline RL). However, existing algorithms and theories for learning near-optimal policies in these two settings are rather different and disconnected. Towards bridging this gap, this paper initiates the theoretical study of policy finetuning, that is, online RL where the learner has additional access to a “reference policy” µ close to the optimal policy π* in a certain sense. We consider the policy finetuning problem in episodic Markov Decision Processes (MDPs) with S states, A actions, and horizon length H. We first design a sharp offline reduction algorithm (which simply executes µ and runs offline policy optimization on the collected dataset) that finds an ε near-optimal policy within Õ(H³SC*/ε²) episodes, where C* is the single-policy concentrability coefficient between µ and π*. This offline result is the first that matches the sample complexity lower bound in this setting, and resolves a recent open question in offline RL. We then establish an Ω(H³S min{C*, A}/ε²) sample complexity lower bound for any policy finetuning algorithm, including those that can adaptively explore the environment. This implies that, perhaps surprisingly, the optimal policy finetuning algorithm is either offline reduction or a purely online RL algorithm that does not use µ. Finally, we design a new hybrid offline/online algorithm for policy finetuning that achieves better sample complexity than both vanilla offline reduction and purely online RL algorithms, in a relaxed setting where µ only satisfies concentrability partially up to a certain time step. Overall, our results offer a quantitative understanding of the benefit of a good reference policy, and take a step towards bridging offline and online RL.
AB - Recent theoretical work studies sample-efficient reinforcement learning (RL) extensively in two settings: learning interactively in the environment (online RL), or learning from an offline dataset (offline RL). However, existing algorithms and theories for learning near-optimal policies in these two settings are rather different and disconnected. Towards bridging this gap, this paper initiates the theoretical study of policy finetuning, that is, online RL where the learner has additional access to a “reference policy” µ close to the optimal policy π* in a certain sense. We consider the policy finetuning problem in episodic Markov Decision Processes (MDPs) with S states, A actions, and horizon length H. We first design a sharp offline reduction algorithm (which simply executes µ and runs offline policy optimization on the collected dataset) that finds an ε near-optimal policy within Õ(H³SC*/ε²) episodes, where C* is the single-policy concentrability coefficient between µ and π*. This offline result is the first that matches the sample complexity lower bound in this setting, and resolves a recent open question in offline RL. We then establish an Ω(H³S min{C*, A}/ε²) sample complexity lower bound for any policy finetuning algorithm, including those that can adaptively explore the environment. This implies that, perhaps surprisingly, the optimal policy finetuning algorithm is either offline reduction or a purely online RL algorithm that does not use µ. Finally, we design a new hybrid offline/online algorithm for policy finetuning that achieves better sample complexity than both vanilla offline reduction and purely online RL algorithms, in a relaxed setting where µ only satisfies concentrability partially up to a certain time step. Overall, our results offer a quantitative understanding of the benefit of a good reference policy, and take a step towards bridging offline and online RL.
UR - http://www.scopus.com/inward/record.url?scp=85130111690&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85130111690&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85130111690
T3 - Advances in Neural Information Processing Systems
SP - 27395
EP - 27407
BT - Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
A2 - Ranzato, Marc'Aurelio
A2 - Beygelzimer, Alina
A2 - Dauphin, Yann
A2 - Liang, Percy S.
A2 - Wortman Vaughan, Jenn
PB - Neural Information Processing Systems Foundation
Y2 - 6 December 2021 through 14 December 2021
ER -