Gradient descent aligns the layers of deep linear networks

Research output: Contribution to conference › Paper

Abstract

This paper establishes risk convergence and asymptotic weight matrix alignment - a form of implicit regularization - of gradient flow and gradient descent when applied to deep linear networks on linearly separable data. In more detail, for gradient flow applied to strictly decreasing loss functions (with similar results for gradient descent with particular decreasing step sizes): (i) the risk converges to 0; (ii) the normalized i-th weight matrix asymptotically equals its rank-1 approximation u_i v_i^⊤; (iii) these rank-1 matrices are aligned across layers, meaning |v_{i+1}^⊤ u_i| → 1. In the case of the logistic loss (binary cross entropy), more can be said: the linear function induced by the network - the product of its weight matrices - converges to the same direction as the maximum margin solution. This last property was identified in prior work, but only under assumptions on gradient descent which here are implied by the alignment phenomenon.
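
To make the abstract's quantities concrete, below is a minimal NumPy sketch, not taken from the paper or its code: it runs plain gradient descent with a fixed step size (the paper's guarantees are for gradient flow and for particular decreasing step sizes) on a deep linear network with the logistic loss over synthetic linearly separable data, and tracks (a) the ratio of each layer's top singular value to its Frobenius norm, which tends to 1 as the normalized layer approaches its rank-1 approximation u_i v_i^⊤, and (b) the adjacent-layer alignment |v_{i+1}^⊤ u_i|. The architecture, initialization, data, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable data: labels are the sign of a fixed direction w_star (assumed setup).
n, d = 200, 10
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = np.sign(X @ w_star)

# Deep linear network f(x) = W_L ... W_1 x with scalar output (assumed widths).
widths = [d, 16, 16, 1]
Ws = [rng.normal(size=(widths[i + 1], widths[i])) / np.sqrt(widths[i])
      for i in range(len(widths) - 1)]

def risk_and_grads(Ws, X, y):
    """Logistic risk and per-layer gradients of the deep linear network."""
    P = Ws[0]
    for W in Ws[1:]:
        P = W @ P                                        # product W_L ... W_1, shape (1, d)
    margins = y * (X @ P.ravel())
    risk = np.mean(np.logaddexp(0.0, -margins))
    # d(risk)/dP = mean_j of -sigmoid(-margin_j) * y_j * x_j^T  (numerically stable form)
    coeff = -y * np.exp(-np.logaddexp(0.0, margins))
    G = (coeff[:, None] * X).mean(axis=0, keepdims=True)  # shape (1, d)
    grads = []
    for i in range(len(Ws)):
        above = np.eye(Ws[i].shape[0])                   # product W_L ... W_{i+1}
        for W in Ws[i + 1:]:
            above = W @ above
        below = np.eye(Ws[0].shape[1])                   # product W_{i-1} ... W_1
        for W in Ws[:i]:
            below = W @ below
        grads.append(above.T @ G @ below.T)              # chain rule for P = above @ W_i @ below
    return risk, grads

def layer_stats(Ws):
    """Rank-1 ratio s_1/||W||_F per layer and |v_{i+1}^T u_i| for adjacent layers."""
    svds = [np.linalg.svd(W) for W in Ws]
    rank1 = [s[0] / np.linalg.norm(W) for W, (_, s, _) in zip(Ws, svds)]
    align = [abs(svds[i + 1][2][0] @ svds[i][0][:, 0]) for i in range(len(Ws) - 1)]
    return rank1, align

lr = 0.05                                                # assumed constant step size
for t in range(20001):
    risk, grads = risk_and_grads(Ws, X, y)
    for W, g in zip(Ws, grads):
        W -= lr * g
    if t % 5000 == 0:
        rank1, align = layer_stats(Ws)
        print(f"step {t:6d}  risk {risk:.4f}  "
              f"s1/frob {[round(float(r), 3) for r in rank1]}  "
              f"align {[round(float(a), 3) for a in align]}")
```

Under these assumptions one should see the risk decrease toward 0 while both printed quantities drift toward 1; since the alignment results are asymptotic, the trend is gradual, and a longer run or a tuned step-size schedule makes it clearer.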

Original language: English (US)
State: Published - Jan 1 2019
Event: 7th International Conference on Learning Representations, ICLR 2019 - New Orleans, United States
Duration: May 6 2019 – May 9 2019

Conference

Conference: 7th International Conference on Learning Representations, ICLR 2019
Country: United States
City: New Orleans
Period: 5/6/19 – 5/9/19

ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics

Cite this

Ji, Z., & Telgarsky, M. J. (2019). Gradient descent aligns the layers of deep linear networks. Paper presented at 7th International Conference on Learning Representations, ICLR 2019, New Orleans, United States.

@conference{0389fab289b64d0b9fd31160f91beb94,
title = "Gradient descent aligns the layers of deep linear networks",
abstract = "This paper establishes risk convergence and asymptotic weight matrix alignment - a form of implicit regularization - of gradient flow and gradient descent when applied to deep linear networks on linearly separable data. In more detail, for gradient flow applied to strictly decreasing loss functions (with similar results for gradient descent with particular decreasing step sizes): (i) the risk converges to 0; (ii) the normalized i-th weight matrix asymptotically equals its rank-1 approximation u_i v_i^⊤; (iii) these rank-1 matrices are aligned across layers, meaning |v_{i+1}^⊤ u_i| → 1. In the case of the logistic loss (binary cross entropy), more can be said: the linear function induced by the network - the product of its weight matrices - converges to the same direction as the maximum margin solution. This last property was identified in prior work, but only under assumptions on gradient descent which here are implied by the alignment phenomenon.",
author = "Ziwei Ji and Telgarsky, {Matus Jan}",
year = "2019",
month = "1",
day = "1",
language = "English (US)",
note = "7th International Conference on Learning Representations, ICLR 2019 ; Conference date: 06-05-2019 Through 09-05-2019",

}

TY - CONF

T1 - Gradient descent aligns the layers of deep linear networks

AU - Ji, Ziwei

AU - Telgarsky, Matus Jan

PY - 2019/1/1

Y1 - 2019/1/1

N2 - This paper establishes risk convergence and asymptotic weight matrix alignment - a form of implicit regularization - of gradient flow and gradient descent when applied to deep linear networks on linearly separable data. In more detail, for gradient flow applied to strictly decreasing loss functions (with similar results for gradient descent with particular decreasing step sizes): (i) the risk converges to 0; (ii) the normalized i-th weight matrix asymptotically equals its rank-1 approximation u_i v_i^⊤; (iii) these rank-1 matrices are aligned across layers, meaning |v_{i+1}^⊤ u_i| → 1. In the case of the logistic loss (binary cross entropy), more can be said: the linear function induced by the network - the product of its weight matrices - converges to the same direction as the maximum margin solution. This last property was identified in prior work, but only under assumptions on gradient descent which here are implied by the alignment phenomenon.

AB - This paper establishes risk convergence and asymptotic weight matrix alignment - a form of implicit regularization - of gradient flow and gradient descent when applied to deep linear networks on linearly separable data. In more detail, for gradient flow applied to strictly decreasing loss functions (with similar results for gradient descent with particular decreasing step sizes): (i) the risk converges to 0; (ii) the normalized i-th weight matrix asymptotically equals its rank-1 approximation u_i v_i^⊤; (iii) these rank-1 matrices are aligned across layers, meaning |v_{i+1}^⊤ u_i| → 1. In the case of the logistic loss (binary cross entropy), more can be said: the linear function induced by the network - the product of its weight matrices - converges to the same direction as the maximum margin solution. This last property was identified in prior work, but only under assumptions on gradient descent which here are implied by the alignment phenomenon.

UR - http://www.scopus.com/inward/record.url?scp=85071153615&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85071153615&partnerID=8YFLogxK

M3 - Paper

AN - SCOPUS:85071153615

ER -