TY - JOUR
T1 - The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation
AU - Amortila, Philip
AU - Jiang, Nan
AU - Szepesvári, Csaba
N1 - PA gratefully acknowledges funding from the Natural Sciences and Engineering Research Council (NSERC) of Canada. NJ acknowledges funding support from NSF IIS-2112471 and NSF CAREER IIS-2141781. CS gratefully acknowledges funding from NSERC and the Canada CIFAR AI Chairs Program through Amii.
PY - 2023
Y1 - 2023
N2 - Theoretical guarantees in reinforcement learning (RL) are known to suffer multiplicative blow-up factors with respect to the misspecification error of function approximation. Yet, the nature of such approximation factors, especially their optimal form in a given learning problem, is poorly understood. In this paper we study this question in linear off-policy value function estimation, where many open questions remain. We study the approximation factor in a broad spectrum of settings, such as presence vs. absence of state aliasing and full vs. partial coverage of the state space. Our core results include instance-dependent upper bounds on the approximation factors with respect to both the weighted L2 norm (where the weighting is the offline state distribution) and the L∞ norm. We show that these approximation factors are optimal (in an instance-dependent sense) for a number of these settings. In other cases, we show that the instance-dependent parameters which appear in the upper bounds are necessary, and that the finiteness of either alone cannot guarantee a finite approximation factor even in the limit of infinite data.
AB - Theoretical guarantees in reinforcement learning (RL) are known to suffer multiplicative blow-up factors with respect to the misspecification error of function approximation. Yet, the nature of such approximation factors, especially their optimal form in a given learning problem, is poorly understood. In this paper we study this question in linear off-policy value function estimation, where many open questions remain. We study the approximation factor in a broad spectrum of settings, such as presence vs. absence of state aliasing and full vs. partial coverage of the state space. Our core results include instance-dependent upper bounds on the approximation factors with respect to both the weighted L2 norm (where the weighting is the offline state distribution) and the L∞ norm. We show that these approximation factors are optimal (in an instance-dependent sense) for a number of these settings. In other cases, we show that the instance-dependent parameters which appear in the upper bounds are necessary, and that the finiteness of either alone cannot guarantee a finite approximation factor even in the limit of infinite data.
UR - http://www.scopus.com/inward/record.url?scp=85174402245&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85174402245&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85174402245
SN - 2640-3498
VL - 202
SP - 768
EP - 790
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 40th International Conference on Machine Learning, ICML 2023
Y2 - 23 July 2023 through 29 July 2023
ER -