TY - GEN
T1 - Projected Stochastic Primal-Dual Method for Constrained Online Learning with Kernels
AU - Zhang, Kaiqing
AU - Zhu, Hao
AU - Basar, Tamer
AU - Koppel, Alec
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
N2 - We consider the problem of stochastic optimization with nonlinear constraints, where the decision variable is not vector-valued but instead a function belonging to a reproducing kernel Hilbert space (RKHS). Currently, solutions exist for only special cases of this problem. To solve this constrained problem with kernels, we first generalize the Representer Theorem to a class of saddle-point problems defined over an RKHS. Furthermore, we develop a primal-dual method which executes alternating projected primal/dual stochastic descent/ascent on the dual-augmented Lagrangian of this problem. The primal projection sets are low-dimensional subspaces of the ambient function space, greedily constructed using matching pursuit. By tuning the projection-induced error to the algorithm step-size, we establish mean convergence in both primal objective sub-optimality and constraint violation, to \mathcal{O}(\sqrt{T}) and \mathcal{O}(T^{3/4}) neighborhoods, respectively, where T is the total number of iterations. We evaluate the proposed method through numerical tests for the application of risk-aware supervised learning.
AB - We consider the problem of stochastic optimization with nonlinear constraints, where the decision variable is not vector-valued but instead a function belonging to a reproducing kernel Hilbert space (RKHS). Currently, solutions exist for only special cases of this problem. To solve this constrained problem with kernels, we first generalize the Representer Theorem to a class of saddle-point problems defined over an RKHS. Furthermore, we develop a primal-dual method which executes alternating projected primal/dual stochastic descent/ascent on the dual-augmented Lagrangian of this problem. The primal projection sets are low-dimensional subspaces of the ambient function space, greedily constructed using matching pursuit. By tuning the projection-induced error to the algorithm step-size, we establish mean convergence in both primal objective sub-optimality and constraint violation, to \mathcal{O}(\sqrt{T}) and \mathcal{O}(T^{3/4}) neighborhoods, respectively, where T is the total number of iterations. We evaluate the proposed method through numerical tests for the application of risk-aware supervised learning.
UR - http://www.scopus.com/inward/record.url?scp=85062179503&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85062179503&partnerID=8YFLogxK
U2 - 10.1109/CDC.2018.8619068
DO - 10.1109/CDC.2018.8619068
M3 - Conference contribution
AN - SCOPUS:85062179503
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 4224
EP - 4231
BT - 2018 IEEE Conference on Decision and Control, CDC 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 57th IEEE Conference on Decision and Control, CDC 2018
Y2 - 17 December 2018 through 19 December 2018
ER -