Characterizing the implicit bias via a primal-dual analysis

Ziwei Ji, Matus Telgarsky

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper shows that the implicit bias of gradient descent on linearly separable data is exactly characterized by the optimal solution of a dual optimization problem given by a smoothed margin, even for general losses. This is in contrast to prior results, which are often tailored to exponentially-tailed losses. For the exponential loss specifically, with n training examples and t gradient descent steps, our dual analysis further allows us to prove an O(ln(n)/ln(t)) convergence rate to the ℓ2 maximum margin direction, when a constant step size is used. This rate is tight in both n and t, which has not been presented by prior work. On the other hand, with a properly chosen but aggressive step size schedule, we prove O(1/t) rates for both ℓ2 margin maximization and implicit bias, whereas prior work (including all first-order methods for the general hard-margin linear SVM problem) proved Õ(1/√t) margin rates, or O(1/t) margin rates to a suboptimal margin, with an implied (slower) bias rate. Our key observations include that gradient descent on the primal variable naturally induces a mirror descent update on the dual variable, and that the dual objective in this setting is smooth enough to give a faster rate.
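
As a rough illustration of the setting described in the abstract (not the paper's code), the following Python sketch runs gradient descent on the empirical exponential loss over a small synthetic linearly separable dataset and tracks the ℓ2 margin of the normalized iterate, which the paper shows grows toward the maximum margin. The dataset, the constant step size, and the iteration counts are hypothetical choices made only for this sketch.

```python
# Minimal sketch: gradient descent on the exponential loss over linearly
# separable data, monitoring the l2 margin of the normalized iterate.
# Data, step size, and iteration counts below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic separable data: fold labels into examples (z_i = y_i * x_i),
# so separability means there exists w with z_i . w > 0 for all i.
n, d = 20, 5
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(n, d))
Z = X * np.sign(X @ w_star)[:, None]   # guarantees z_i . w_star > 0

def exp_loss_grad(w):
    """Gradient of the empirical exponential loss (1/n) * sum_i exp(-z_i . w)."""
    q = np.exp(-Z @ w)                 # per-example weights (dual-like variable)
    return -(Z.T @ q) / n

def margin(w):
    """Normalized l2 margin: min_i z_i . w / ||w||."""
    return np.min(Z @ w) / max(np.linalg.norm(w), 1e-12)

w = np.zeros(d)
eta = 0.1                              # constant step size (assumed)
for t in range(1, 100001):
    w -= eta * exp_loss_grad(w)
    if t in (10, 100, 1000, 10000, 100000):
        print(f"t = {t:6d}   margin of w_t/||w_t|| = {margin(w):.4f}")

# With a constant step size the printed margins should creep upward toward the
# dataset's l2 maximum margin, consistent with the slow O(ln(n)/ln(t)) rate
# discussed in the abstract; the faster O(1/t) rates require the aggressive
# step size schedule analyzed in the paper.
```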

Original language: English (US)
Pages (from-to): 772-804
Number of pages: 33
Journal: Proceedings of Machine Learning Research
Volume: 132
State: Published - 2021
Event: 32nd International Conference on Algorithmic Learning Theory, ALT 2021 - Virtual, Online
Duration: Mar 16 2021 - Mar 19 2021

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
