Stochastic recursive gradient descent ascent for stochastic nonconvex-strongly-concave minimax problems

Luo Luo, Haishan Ye, Zhichao Huang, Tong Zhang

Research output: Contribution to journal › Conference article › peer-review

Abstract

We consider nonconvex-concave minimax optimization problems of the form min_x max_{y∈Y} f(x, y), where f is strongly-concave in y but possibly nonconvex in x, and Y is a convex and compact set. We focus on the stochastic setting, where we can only access an unbiased stochastic gradient estimate of f at each iteration. This formulation includes many machine learning applications as special cases, such as robust optimization and adversarial training. We are interested in finding an O(ε)-stationary point of the function F(·) = max_{y∈Y} f(·, y). The most popular algorithm to solve this problem is stochastic gradient descent ascent, which requires O(κ³ε⁻⁴) stochastic gradient evaluations, where κ is the condition number. In this paper, we propose a novel method called Stochastic Recursive gradiEnt Descent Ascent (SREDA), which estimates gradients more efficiently using variance reduction. This method achieves the best known stochastic gradient complexity of O(κ³ε⁻³), and its dependency on ε is optimal for this problem.
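The abstract describes the general recipe only at a high level: a variance-reduced (recursive) stochastic gradient estimator driving a descent step in x and a projected ascent step in y. Below is a minimal illustrative sketch of that idea on a toy objective. The toy function, step sizes, batch sizes, and refresh period are assumptions chosen for illustration; this is not the authors' SREDA implementation or its analyzed parameter schedule, only a SARAH/SPIDER-style recursive estimator plugged into descent ascent.

```python
# Illustrative sketch (not the paper's algorithm or constants): a SARAH-style
# recursive gradient estimator driving descent in x and projected ascent in y
# on a toy objective f(x, y) = sum(sin(x)) + x^T A y - 0.5 ||y||^2, which is
# nonconvex in x and strongly concave in y, with Y a Euclidean ball.
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))

def sample_noise(batch):
    """Average of `batch` noise vectors, mimicking a minibatch stochastic oracle."""
    return rng.standard_normal((batch, d)).mean(axis=0)

def grads(x, y, noise):
    """Unbiased stochastic gradients of the toy f at (x, y) for a fixed sample."""
    gx = np.cos(x) + A @ y + noise      # gradient w.r.t. x
    gy = A.T @ x - y + noise            # gradient w.r.t. y
    return gx, gy

def project_ball(y, radius=1.0):
    """Projection onto Y = {y : ||y|| <= radius}, a convex compact set."""
    n = np.linalg.norm(y)
    return y if n <= radius else y * (radius / n)

x, y = rng.standard_normal(d), np.zeros(d)
eta_x, eta_y = 0.02, 0.05               # assumed step sizes
q, big_batch, small_batch = 20, 256, 8  # assumed refresh period and batch sizes

for t in range(200):
    if t % q == 0:
        # Periodic refresh: a large-batch gradient resets the recursive estimator.
        vx, vy = grads(x, y, sample_noise(big_batch))
    else:
        # Recursive update v_t = g(z_t; xi) - g(z_{t-1}; xi) + v_{t-1},
        # evaluating the SAME minibatch at the current and previous iterates.
        noise = sample_noise(small_batch)
        gx, gy = grads(x, y, noise)
        gx_prev, gy_prev = grads(x_prev, y_prev, noise)
        vx, vy = gx - gx_prev + vx, gy - gy_prev + vy
    x_prev, y_prev = x.copy(), y.copy()
    x = x - eta_x * vx                   # descent step on x
    y = project_ball(y + eta_y * vy)     # projected ascent step on y

# Rough proxy for stationarity of F(x) = max_y f(x, y); the paper's criterion
# is stated in terms of the exact gradient of F, not this stochastic estimate.
print("final ||v_x||:", np.linalg.norm(vx))
```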

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 2020-December
State: Published - 2020
Externally published: Yes
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: Dec 6 2020 - Dec 12 2020

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
