Abstract
We investigate the problem of corruption robustness in offline reinforcement learning (RL) with general function approximation, where an adversary can corrupt each sample in the offline dataset, and the corruption level ς ≥ 0 quantifies the cumulative corruption amount over n episodes and H steps. Our goal is to find a policy that is robust to such corruption and minimizes the suboptimality gap with respect to the optimal policy for the uncorrupted Markov decision process (MDP). Drawing inspiration from the uncertainty-weighting technique used in robust online RL [18, 55], we design a new uncertainty weight iteration procedure that can be computed efficiently on batched samples, and we propose a corruption-robust algorithm for offline RL. Notably, under the assumption of single-policy coverage and knowledge of ς, our proposed algorithm achieves a suboptimality bound that is worsened by an additive factor of O(ς · (CC(λ, F̂, Z^n_H))^{1/2} · (C(F̂, µ))^{-1/2} · n^{-1}) due to the corruption. Here CC(λ, F̂, Z^n_H) is the coverage coefficient that depends on the regularization parameter λ, the confidence set F̂, and the dataset Z^n_H, and C(F̂, µ) is a coefficient that depends on F̂ and the underlying data distribution µ. When specialized to linear MDPs, the corruption-dependent error term reduces to O(ςdn^{-1}), where d is the dimension of the feature map, which matches the existing lower bound for corrupted linear MDPs. This suggests that our analysis is tight in terms of the corruption-dependent term.
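The uncertainty-weighting idea can be made concrete in the linear-MDP specialization. The following is a minimal sketch, not the paper's algorithm: it assumes a linear feature map φ over state-action pairs, a hypothetical threshold parameter α, and a fixed number of fixed-point passes, and it computes per-sample weights that down-weight high-uncertainty (and hence potentially corrupted) samples in a subsequent weighted ridge regression.

```python
import numpy as np

def uncertainty_weight_iteration(phi, lam=1.0, alpha=1.0, n_iters=10):
    """Illustrative weight iteration on a batch of feature vectors.

    phi     : (n, d) array of features for the batched samples (assumed).
    lam     : ridge regularization parameter (lambda in the bound above).
    alpha   : hypothetical threshold controlling how strongly
              high-uncertainty samples are down-weighted.
    Returns : per-sample weights w >= 1; each sample's loss would later
              be divided by its weight in the weighted regression.
    """
    n, d = phi.shape
    w = np.ones(n)
    for _ in range(n_iters):
        # Weighted covariance: Sigma = lam * I + sum_i phi_i phi_i^T / w_i
        Sigma = lam * np.eye(d) + (phi / w[:, None]).T @ phi
        Sigma_inv = np.linalg.inv(Sigma)
        # Elliptical-potential-style uncertainty (bonus) of each sample.
        bonus = np.sqrt(np.einsum("ij,jk,ik->i", phi, Sigma_inv, phi))
        # Fixed-point update: weights grow with a sample's uncertainty,
        # so uncertain samples contribute less on the next pass.
        w = np.maximum(1.0, bonus / alpha)
    return w

# Toy usage: random features standing in for state-action embeddings.
rng = np.random.default_rng(0)
weights = uncertainty_weight_iteration(rng.normal(size=(100, 5)))
print(weights.min(), weights.max())
```

Dividing each sample's contribution to the covariance by its current weight and then recomputing the uncertainty drives the weights toward a fixed point, which is the role the batched weight iteration plays in the algorithm described above.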
Original language | English (US)
---|---
Journal | Advances in Neural Information Processing Systems
Volume | 36
State | Published - 2023
Externally published | Yes
Event | 37th Conference on Neural Information Processing Systems, NeurIPS 2023, New Orleans, United States (Dec 10 2023 → Dec 16 2023)
ASJC Scopus subject areas
- Computer Networks and Communications
- Information Systems
- Signal Processing