Abstract
In this paper, we study the convergence properties of off-policy policy optimization algorithms with state-action density ratio correction in the function approximation setting, where the objective function is formulated as a max-max-min problem. We first characterize the bias of the learning objective, and then present two strategies with finite-time convergence guarantees. In our first strategy, we propose an algorithm called P-SREDA with convergence rate O(ε⁻³), whose dependence on ε is optimal. In our second strategy, we design a new off-policy actor-critic style algorithm named O-SPIM. We prove that O-SPIM converges to a stationary point with total complexity O(ε⁻⁴), which matches the convergence rate of some recent actor-critic algorithms in the on-policy setting.
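To make the setup concrete, below is a minimal NumPy sketch of the generic idea behind density-ratio-corrected off-policy policy gradients: off-policy samples are reweighted by an estimate of the state-action density ratio d^π(s,a)/d_D(s,a). The batch, the placeholder ratios `w`, and the tabular softmax policy are illustrative assumptions; this is not the paper's P-SREDA or O-SPIM algorithm, and the nested estimation of the ratio (which gives the objective its max-max-min structure) is omitted.

```python
# Illustrative sketch only (not P-SREDA / O-SPIM): one off-policy policy-gradient step
# where each sampled transition is reweighted by an assumed-given state-action density
# ratio w(s, a) ~ d^pi(s, a) / d_D(s, a). In the paper this ratio is itself learned,
# which is what produces the max-max-min structure; here it is treated as an input.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
theta = np.zeros((n_states, n_actions))  # softmax policy parameters, one row per state

def policy(theta, s):
    """Softmax policy over actions in state s."""
    logits = theta[s]
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Hypothetical off-policy batch: (state, action, reward) triples collected under a
# behavior policy. The density-ratio weights w are placeholders; in practice they
# come from a separate estimation (the inner problem of the max-max-min objective).
batch = [(rng.integers(n_states), rng.integers(n_actions), rng.normal())
         for _ in range(64)]
w = {(s, a): 1.0 for s in range(n_states) for a in range(n_actions)}

lr = 0.1
grad = np.zeros_like(theta)
for s, a, r in batch:
    p = policy(theta, s)
    # Gradient of log pi(a|s) for a softmax policy: e_a - pi(.|s)
    dlogpi = -p
    dlogpi[a] += 1.0
    # Ratio-weighted policy-gradient surrogate; this simplified step ignores the
    # dependence of w on theta, which the full objective accounts for.
    grad[s] += w[(s, a)] * r * dlogpi
theta += lr * grad / len(batch)
```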
Original language | English (US) |
---|---|
Pages (from-to) | 2658-2705 |
Number of pages | 48 |
Journal | Proceedings of Machine Learning Research |
Volume | 151 |
State | Published - 2022 |
Event | 25th International Conference on Artificial Intelligence and Statistics, AISTATS 2022 - Virtual, Online, Spain (Mar 28 2022 → Mar 30 2022) |
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability