Self-Regularity of Non-Negative Output Weights for Overparameterized Two-Layer Neural Networks

David Gamarnik, Eren C. Kızıldağ, Ilias Zadik

Research output: Contribution to journal › Article › peer-review

Abstract

We consider the problem of finding a two-layer neural network with sigmoid, rectified linear unit (ReLU), or binary step activation functions that 'fits' a training data set as accurately as possible, as quantified by the training error, and study the following question: does a low training error guarantee that the norm of the output layer (the outer norm) is itself small? We answer this question affirmatively for the case of non-negative output weights. Using a simple covering number argument, we establish that, under quite mild distributional assumptions on the input/label pairs, any such network achieving a small training error on polynomially many data points necessarily has a well-controlled outer norm. Notably, our results (a) have a polynomial (in d) sample complexity, (b) are independent of the number of hidden units (which can potentially be very large), (c) are oblivious to the training algorithm, and (d) require quite mild assumptions on the data (in particular, the input vector X ∈ ℝ^d need not have independent coordinates). We then leverage our bounds to establish generalization guarantees for such networks through the fat-shattering dimension, a scale-sensitive complexity measure of the class to which the network architectures we investigate belong. Notably, our generalization bounds also have good sample complexity (polynomial in d with low degree), and are in fact near-linear for some important cases of interest.
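
To make the objects in the abstract concrete, the sketch below (not taken from the paper) sets up a two-layer network f(x) = Σ_j a_j σ(⟨w_j, x⟩) with non-negative output weights a_j ≥ 0, and computes a training error together with an "outer norm" of the output layer. The choice of squared loss, the ℓ1 outer norm, Gaussian data, and all variable names are illustrative assumptions; the paper's precise loss, norm, and distributional conditions may differ.

import numpy as np

def relu(z):
    # Rectified linear unit activation; sigmoid or binary step could be used instead.
    return np.maximum(z, 0.0)

def two_layer_net(X, W, a, activation=relu):
    # Evaluate f(x_i) = sum_j a_j * activation(<w_j, x_i>) for each row x_i of X.
    return activation(X @ W.T) @ a

def training_error(X, y, W, a, activation=relu):
    # Empirical squared loss (1/n) * sum_i (f(x_i) - y_i)^2 (an assumed choice of loss).
    preds = two_layer_net(X, W, a, activation)
    return np.mean((preds - y) ** 2)

def outer_norm(a):
    # l1 norm of the output layer; equals sum(a) when the weights are non-negative.
    return np.sum(np.abs(a))

# Hypothetical usage: random data and an overparameterized hidden layer (m >> d).
rng = np.random.default_rng(0)
n, d, m = 200, 10, 1000                  # samples, input dimension, hidden units
X = rng.standard_normal((n, d))          # inputs (need not have independent coordinates)
y = rng.standard_normal(n)               # labels
W = rng.standard_normal((m, d))          # inner-layer weights
a = np.abs(rng.standard_normal(m)) / m   # non-negative output weights

print("training error:", training_error(X, y, W, a))
print("outer norm    :", outer_norm(a))

The paper's self-regularity result says, roughly, that whenever the training error above is small on polynomially many samples, the outer norm is automatically well controlled, regardless of how large m is or which algorithm produced the weights.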

Original language: English (US)
Pages (from-to): 1310-1319
Number of pages: 10
Journal: IEEE Transactions on Signal Processing
Volume: 70
DOIs
State: Published - 2022
Externally published: Yes

Keywords

  • covering number
  • Deep learning
  • gradient descent
  • neural networks
  • sample complexity
  • self-regularity

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
