Gradient descent aligns the layers of deep linear networks
Ziwei Ji, Matus Telgarsky
Siebel School of Computing and Data Science
Electrical and Computer Engineering
Research output: Contribution to conference › Paper › peer-review
Fingerprint
Dive into the research topics of 'Gradient descent aligns the layers of deep linear networks'. Together they form a unique fingerprint.
Keyphrases
- Gradient Descent: 100%
- Deep Linear Networks: 100%
- Weight Matrix: 75%
- Gradient Flow: 50%
- Linear Function: 25%
- Loss Function: 25%
- Rank-one Approximation: 25%
- Maximum Margin: 25%
- Rank-1 Matrices: 25%
- Matrix Alignment: 25%
- Logistic Loss: 25%
- Implicit Regularization: 25%
- Linearly Separable Data: 25%
- Binary Cross Entropy: 25%
- Asymptotic Weight: 25%
- Decreasing Losses: 25%
Mathematics
- Weight Matrix: 100%
- Gradient Flow: 66%
- Asymptotics: 33%
- Matrix (Mathematics): 33%
- Regularization: 33%
- Same Direction: 33%
- Step Size: 33%
- Linear Function: 33%
- Loss Function: 33%
- Cross-Entropy: 33%
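The fingerprint terms above (deep linear networks, logistic loss, linearly separable data, weight-matrix alignment) describe the paper's setting: gradient descent on a product of weight matrices. A minimal sketch of that setting, not the authors' code, is given below; the depth, data, step size, and iteration count are all illustrative assumptions. It trains a depth-3 linear network on separable data with the logistic loss, then measures how well each layer's top singular direction lines up with the layer above it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable data: the label is the sign of the first coordinate.
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] > 0, 1.0, -1.0)

# Depth-3 deep linear network f(x) = W3 @ W2 @ W1 @ x, small random init.
Ws = [rng.normal(scale=0.3, size=(2, 2)),
      rng.normal(scale=0.3, size=(2, 2)),
      rng.normal(scale=0.3, size=(1, 2))]

def forward(Ws, X):
    """Return the list of layer activations, input first."""
    acts = [X.T]
    for W in Ws:
        acts.append(W @ acts[-1])
    return acts

def logistic_loss(scores, y):
    # Numerically safe mean of log(1 + exp(-y * score)).
    return float(np.mean(np.logaddexp(0.0, -y * scores)))

lr = 0.2
loss_init = logistic_loss(forward(Ws, X)[-1].ravel(), y)

for _ in range(4000):
    acts = forward(Ws, X)
    scores = acts[-1].ravel()
    # dL/dscores for the logistic loss, averaged over examples.
    D = (-y / (1.0 + np.exp(y * scores)) / len(y)).reshape(1, -1)
    # Backpropagate through the linear chain; propagate D with the
    # pre-update weights, then apply the gradient step.
    for i in reversed(range(len(Ws))):
        grad = D @ acts[i].T
        D = Ws[i].T @ D
        Ws[i] -= lr * grad

loss_final = logistic_loss(forward(Ws, X)[-1].ravel(), y)

def adjacent_alignment(W_lower, W_upper):
    """|cos| between the top left singular vector of the lower layer
    and the top right singular vector of the layer above it."""
    U, _, _ = np.linalg.svd(W_lower)
    _, _, Vt = np.linalg.svd(W_upper)
    return abs(float(Vt[0] @ U[:, 0]))

aligns = [adjacent_alignment(Ws[i], Ws[i + 1]) for i in range(len(Ws) - 1)]
print(loss_init, loss_final, aligns)
```

As training drives the loss down, each `aligns` entry is an |cosine| in [0, 1]; the alignment phenomenon in the paper's title concerns these quantities approaching 1 as weight norms grow, with each weight matrix approaching a rank-one approximation.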