Abstract
A recent breakthrough in deep learning theory shows that the training of overparameterized deep neural networks can be characterized by a kernel function called the neural tangent kernel (NTK). However, it is known that this type of result does not perfectly match practice, as NTK-based analysis requires the network weights to stay very close to their initialization throughout training and cannot handle regularizers or gradient noise. In this paper, we provide a generalized neural tangent kernel analysis and show that noisy gradient descent with weight decay can still exhibit a “kernel-like” behavior. This implies that the training loss converges linearly up to a certain accuracy. We also establish a novel generalization error bound for two-layer neural networks trained by noisy gradient descent with weight decay.
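For illustration, the sketch below shows the kind of training procedure the abstract refers to: a two-layer ReLU network trained by noisy gradient descent with weight decay. The parameterization, hyperparameters (width `m`, step size `eta`, weight-decay coefficient `lam`, noise scale `sigma`), and the specific form of noise injection are illustrative assumptions, not the paper's exact setup.

```python
# Hypothetical sketch (not the authors' code): noisy gradient descent with
# weight decay on a two-layer ReLU network, as described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples in d dimensions with +/-1 labels (assumed setup).
n, d, m = 64, 10, 1024          # m is the overparameterized hidden width
X = rng.standard_normal((n, d))
y = np.sign(rng.standard_normal(n))

# Two-layer network f(x) = (1/sqrt(m)) * a^T relu(W x);
# second-layer weights a are fixed at +/-1, only W is trained.
W = rng.standard_normal((m, d))
a = rng.choice([-1.0, 1.0], size=m)

def forward(W, X):
    return (np.maximum(X @ W.T, 0.0) @ a) / np.sqrt(m)

eta, lam, sigma, steps = 0.1, 1e-4, 1e-3, 200   # assumed hyperparameters

for t in range(steps):
    pred = forward(W, X)
    resid = pred - y                               # squared-loss residual
    # Gradient of (0.5/n) * ||f(X) - y||^2 with respect to W.
    act = (X @ W.T > 0.0).astype(float)            # ReLU derivative, n x m
    grad = ((act * resid[:, None]) * a).T @ X / (n * np.sqrt(m))
    noise = sigma * rng.standard_normal(W.shape)   # injected gradient noise
    # Noisy gradient step with weight decay (lam * W acts as the regularizer).
    W -= eta * (grad + lam * W) + np.sqrt(eta) * noise
    if t % 50 == 0:
        print(f"step {t:3d}  train loss {0.5 * np.mean(resid ** 2):.4f}")
```

Under the paper's regime, such a procedure is shown to keep a “kernel-like” behavior even though the regularizer and noise push the weights away from a pure NTK analysis; the sketch only mirrors the algorithmic ingredients, not the theoretical guarantees.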
Original language | English (US) |
---|---|
Journal | Advances in Neural Information Processing Systems |
Volume | 2020-December |
State | Published - 2020 |
Externally published | Yes |
Event | 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online; Dec 6 2020 – Dec 12 2020 |
ASJC Scopus subject areas
- Computer Networks and Communications
- Information Systems
- Signal Processing