Abstract
We consider the problem of stochastic optimization with nonlinear constraints, where the decision variable is not vector-valued but instead a function belonging to a reproducing kernel Hilbert space (RKHS). Currently, solutions exist for only special cases of this problem. To solve this constrained problem with kernels, we first generalize the Representer Theorem to a class of saddle-point problems defined over RKHS. Then, we develop a primal-dual method that executes alternating projected primal/dual stochastic gradient descent/ascent on the dual-augmented Lagrangian of the problem. The primal projection sets are low-dimensional subspaces of the ambient function space, greedily constructed using matching pursuit. By tuning the projection-induced error to the algorithm step-size, we establish mean convergence in both primal objective sub-optimality and constraint violation, to respective O(√T) and O(T^(3/4)) neighborhoods. Here, T is the final iteration index, the constant step-size is chosen as 1/√T, and the approximation budget is 1/T. Finally, we demonstrate experimentally the effectiveness of the proposed method for risk-aware supervised learning.
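To make the algorithmic recipe concrete, below is a minimal Python sketch of one plausible instantiation: functional stochastic primal-dual steps in an RKHS, with the primal iterate kept sparse by a destructive matching-pursuit pruning step whose budget is tuned to the step-size as in the abstract. The toy constrained problem (squared loss subject to a second-moment constraint), the Gaussian kernel, the `komp_prune` routine, and all step-size and budget choices are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Y, bw=0.5):
    """Gaussian kernel matrix between row-stacked point sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def komp_prune(D, w, eps, bw=0.5):
    """Destructive matching pursuit (illustrative): repeatedly remove the
    dictionary atom whose removal error, after a least-squares refit of the
    remaining weights, is smallest in RKHS norm, while it stays below eps."""
    while len(w) > 1:
        K = rbf(D, D, bw)
        f_norm2 = w @ K @ w
        best = None
        for j in range(len(w)):
            keep = np.arange(len(w)) != j
            Kkk = K[np.ix_(keep, keep)] + 1e-9 * np.eye(keep.sum())
            w_ref = np.linalg.solve(Kkk, K[keep] @ w)  # refit kept weights
            err2 = f_norm2 - w_ref @ (K[keep] @ w)     # ||f - f_pruned||_H^2
            if best is None or err2 < best[0]:
                best = (err2, j, w_ref)
        err2, j, w_ref = best
        if err2 > eps ** 2:
            break
        keep = np.arange(len(w)) != j
        D, w = D[keep], w_ref
    return D, w

# Toy constrained problem (a stand-in for risk-aware learning):
#   min_f E[(f(x) - y)^2]   s.t.   E[f(x)^2] - c <= 0
T, c, bw = 200, 0.5, 0.5
eta = 1.0 / np.sqrt(T)   # constant step-size 1/sqrt(T), as in the abstract
eps = eta ** 2           # approximation budget 1/T, tuned to the step-size
D, w, lam = np.zeros((0, 1)), np.zeros(0), 0.0

for t in range(T):
    x = rng.uniform(-2, 2, size=(1, 1))
    y = np.sin(2 * x[0, 0]) + 0.1 * rng.standard_normal()
    fx = (rbf(x, D, bw) @ w)[0] if len(w) else 0.0
    # Functional stochastic gradient of the Lagrangian adds one atom at x:
    #   [2(f(x) - y) + lam * 2 f(x)] k(x, .)
    g_primal = 2.0 * (fx - y) + lam * 2.0 * fx
    D, w = np.vstack([D, x]), np.append(w, -eta * g_primal)
    D, w = komp_prune(D, w, eps, bw)   # project onto a sparse subspace
    # Projected dual ascent on the regularized (dual-augmented) Lagrangian
    lam = max(0.0, lam + eta * ((fx ** 2 - c) - eta * lam))

print(f"dictionary size: {len(w)}, lambda: {lam:.3f}")
```

The pruning step is what keeps the primal iterate's kernel dictionary from growing linearly with T; the choice eps = eta² mirrors the abstract's pairing of a 1/√T step-size with a 1/T approximation budget.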
| Original language | English (US) |
|---|---|
| Article number | 8678800 |
| Pages (from-to) | 2528-2542 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Signal Processing |
| Volume | 67 |
| Issue number | 10 |
| DOIs | |
| State | Published - May 15 2019 |
| Externally published | Yes |
Keywords
- Machine learning
- optimization methods
- radial basis function networks
ASJC Scopus subject areas
- Signal Processing
- Electrical and Electronic Engineering