TY - JOUR
T1 - Semi-Proximal Mirror-Prox for nonsmooth composite minimization
AU - He, Niao
AU - Harchaoui, Zaid
N1 - Funding Information:
The authors would like to thank A. Juditsky and A. Nemirovski for fruitful discussions. This work was supported by NSF Grant CMMI-1232623, LabEx Persyval-Lab (ANR-11-LABX-0025), project "Titan" (CNRS-Mastodons), project "Macaron" (ANR-14-CE23-0003-01), the MSR-Inria joint centre, and the Moore-Sloan Data Science Environment at NYU.
PY - 2015
Y1 - 2015
N2 - We propose a new first-order optimization algorithm to solve high-dimensional non-smooth composite minimization problems. Typical examples of such problems have an objective that decomposes into a non-smooth empirical risk part and a non-smooth regularization penalty. The proposed algorithm, called Semi-Proximal Mirror-Prox, leverages the saddle point representation of one part of the objective while handling the other part via linear minimization over the domain. The algorithm stands in contrast to more classical proximal gradient algorithms with smoothing, which require the computation of proximal operators at each iteration and can therefore be impractical for high-dimensional problems. We establish the theoretical convergence rate of Semi-Proximal Mirror-Prox, which exhibits the optimal complexity bound, i.e. O(1/ε²), on the number of calls to the linear minimization oracle. We present promising experimental results demonstrating the merits of the approach in comparison to competing methods.
AB - We propose a new first-order optimization algorithm to solve high-dimensional non-smooth composite minimization problems. Typical examples of such problems have an objective that decomposes into a non-smooth empirical risk part and a non-smooth regularization penalty. The proposed algorithm, called Semi-Proximal Mirror-Prox, leverages the saddle point representation of one part of the objective while handling the other part via linear minimization over the domain. The algorithm stands in contrast to more classical proximal gradient algorithms with smoothing, which require the computation of proximal operators at each iteration and can therefore be impractical for high-dimensional problems. We establish the theoretical convergence rate of Semi-Proximal Mirror-Prox, which exhibits the optimal complexity bound, i.e. O(1/ε²), on the number of calls to the linear minimization oracle. We present promising experimental results demonstrating the merits of the approach in comparison to competing methods.
UR - http://www.scopus.com/inward/record.url?scp=84965131055&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84965131055&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:84965131055
SN - 1049-5258
VL - 2015-January
SP - 3411
EP - 3419
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 29th Annual Conference on Neural Information Processing Systems, NIPS 2015
Y2 - 7 December 2015 through 12 December 2015
ER -