Abstract
This paper establishes risk convergence and asymptotic weight matrix alignment - a form of implicit regularization - of gradient flow and gradient descent when applied to deep linear networks on linearly separable data. In more detail, for gradient flow applied to strictly decreasing loss functions (with similar results for gradient descent with particular decreasing step sizes): (i) the risk converges to 0; (ii) the normalized $i$-th weight matrix asymptotically equals its rank-1 approximation $u_i v_i^\top$; (iii) these rank-1 matrices are aligned across layers, meaning $|v_{i+1}^\top u_i| \to 1$. In the case of the logistic loss (binary cross entropy), more can be said: the linear function induced by the network - the product of its weight matrices - converges to the same direction as the maximum margin solution. This last property was identified in prior work, but only under assumptions on gradient descent which here are implied by the alignment phenomenon.
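Claims (ii) and (iii) are easy to probe numerically. Below is a minimal NumPy sketch (not from the paper; the three-layer architecture, constant step size, and iteration count are illustrative assumptions, whereas the paper's guarantees are for gradient flow and particular decreasing step sizes) that trains a deep linear network with the logistic loss on separable data by plain gradient descent, then checks that each weight matrix approaches rank one and that adjacent layers' top singular vectors align.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable toy data: labels from a fixed ground-truth direction.
n, d = 50, 2
X = rng.normal(size=(n, d))
y = np.sign(X @ np.array([1.0, -1.0]))

# Three-layer deep linear network: f(x) = W[2] @ W[1] @ W[0] @ x.
W = [0.5 * rng.normal(size=(2, 2)),
     0.5 * rng.normal(size=(2, 2)),
     0.5 * rng.normal(size=(1, 2))]

def chain(mats, dim):
    """Product of the matrices in `mats` (applied in order), starting from I_dim."""
    P = np.eye(dim)
    for M in mats:
        P = M @ P
    return P

lr = 0.1
for step in range(20000):
    P = chain(W, d)                          # product of all layers, shape (1, d)
    margins = y * (X @ P.T).ravel()          # y_i * f(x_i)
    # logistic loss l(z) = log(1 + exp(-z)); its derivative, computed stably:
    # l'(z) = -1 / (1 + exp(z)) = -exp(-logaddexp(0, z))
    g = -np.exp(-np.logaddexp(0.0, margins))
    dP = ((g * y) @ X / n).reshape(1, d)     # gradient of the risk wrt the product
    # chain rule: with P = A @ W[j] @ B, the gradient wrt W[j] is A.T @ dP @ B.T
    grads = []
    for j in range(len(W)):
        B = chain(W[:j], d)                  # layers below W[j]
        A = chain(W[j + 1:], W[j].shape[0])  # layers above W[j]
        grads.append(A.T @ dP @ B.T)
    for j in range(len(W)):
        W[j] -= lr * grads[j]

# Check (ii) and (iii): each W[j] is near rank one, and the top right singular
# vector of W[j+1] aligns with the top left singular vector of W[j].
for j in range(len(W) - 1):
    Uj, sj, _ = np.linalg.svd(W[j])
    _, _, Vt_next = np.linalg.svd(W[j + 1])
    rank1_ratio = sj[0] / np.linalg.norm(W[j])   # -> 1 iff W[j] is rank one
    alignment = abs(Vt_next[0] @ Uj[:, 0])       # |v_{j+1}^T u_j| -> 1
    print(f"layer {j}: rank-1 ratio {rank1_ratio:.4f}, alignment {alignment:.4f}")
```

With enough steps, both printed quantities approach 1, matching (ii) and (iii); the step size and horizon here are ad hoc and chosen only so the sketch runs quickly.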
| Original language | English (US) |
| --- | --- |
| State | Published - Jan 1 2019 |
| Event | 7th International Conference on Learning Representations, ICLR 2019 - New Orleans, United States. Duration: May 6 2019 → May 9 2019 |
Conference

| Conference | 7th International Conference on Learning Representations, ICLR 2019 |
| --- | --- |
| Country | United States |
| City | New Orleans |
| Period | 5/6/19 → 5/9/19 |
ASJC Scopus subject areas
- Education
- Computer Science Applications
- Linguistics and Language
- Language and Linguistics
Cite this
Gradient descent aligns the layers of deep linear networks. / Ji, Ziwei; Telgarsky, Matus Jan.
2019. Paper presented at 7th International Conference on Learning Representations, ICLR 2019, New Orleans, United States. Research output: Contribution to conference › Paper
TY - CONF
T1 - Gradient descent aligns the layers of deep linear networks
AU - Ji, Ziwei
AU - Telgarsky, Matus Jan
PY - 2019/1/1
Y1 - 2019/1/1
N2 - This paper establishes risk convergence and asymptotic weight matrix alignment - a form of implicit regularization - of gradient flow and gradient descent when applied to deep linear networks on linearly separable data. In more detail, for gradient flow applied to strictly decreasing loss functions (with similar results for gradient descent with particular decreasing step sizes): (i) the risk converges to 0; (ii) the normalized $i$-th weight matrix asymptotically equals its rank-1 approximation $u_i v_i^\top$; (iii) these rank-1 matrices are aligned across layers, meaning $|v_{i+1}^\top u_i| \to 1$. In the case of the logistic loss (binary cross entropy), more can be said: the linear function induced by the network - the product of its weight matrices - converges to the same direction as the maximum margin solution. This last property was identified in prior work, but only under assumptions on gradient descent which here are implied by the alignment phenomenon.
AB - This paper establishes risk convergence and asymptotic weight matrix alignment - a form of implicit regularization - of gradient flow and gradient descent when applied to deep linear networks on linearly separable data. In more detail, for gradient flow applied to strictly decreasing loss functions (with similar results for gradient descent with particular decreasing step sizes): (i) the risk converges to 0; (ii) the normalized $i$-th weight matrix asymptotically equals its rank-1 approximation $u_i v_i^\top$; (iii) these rank-1 matrices are aligned across layers, meaning $|v_{i+1}^\top u_i| \to 1$. In the case of the logistic loss (binary cross entropy), more can be said: the linear function induced by the network - the product of its weight matrices - converges to the same direction as the maximum margin solution. This last property was identified in prior work, but only under assumptions on gradient descent which here are implied by the alignment phenomenon.
UR - http://www.scopus.com/inward/record.url?scp=85071153615&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85071153615&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85071153615
ER -