Stationary Points of a Shallow Neural Network with Quadratic Activations and the Global Optimality of the Gradient Descent Algorithm

David Gamarnik, Eren C. Kızıldag, Ilias Zadik

Research output: Contribution to journal › Article › peer-review

Abstract

We consider the problem of training a shallow neural network with quadratic activation functions and the generalization power of such trained networks. Assuming that the samples are generated by a full-rank matrix W* of the hidden network node weights, we obtain the following results. We establish that all full-rank approximately stationary solutions of the risk minimization problem are also approximate global optima of the risk (both in-sample and population). As a consequence, we establish that, when trained on polynomially many samples, the gradient descent algorithm converges to the global optimum of the risk minimization problem regardless of the width of the network, provided it is initialized below some value ν*, which we compute. Furthermore, the network produced by the gradient descent algorithm has near-zero generalization error. Next, we establish that initializing the gradient descent algorithm below ν* is easily achieved when the weights of the ground truth matrix W* are randomly generated and the matrix is sufficiently overparameterized. Finally, we identify a simple necessary and sufficient geometric condition on the size of the training set under which any global minimizer of the empirical risk necessarily has zero generalization error.
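To make the setting concrete, the following is a minimal, illustrative sketch (not taken from the paper) of the model and algorithm described above: labels are generated by a planted full-rank weight matrix W* through a quadratic-activation network f(x; W) = ||Wx||², and a possibly wider network is trained by gradient descent on the empirical risk. All dimensions, the sample size, the step size, and the initialization scale below are hypothetical placeholders; in particular, the paper's initialization condition involving the threshold ν* is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m_star, m, n = 5, 5, 10, 1000   # input dim, planted width, trained width, sample size

W_star = rng.normal(size=(m_star, d))       # planted full-rank weight matrix W*
X = rng.normal(size=(n, d))                 # training inputs x_1, ..., x_n
y = np.sum((X @ W_star.T) ** 2, axis=1)     # labels y_i = sum_j (w*_j^T x_i)^2 = ||W* x_i||^2

def empirical_risk(W):
    """(1/2n) * sum_i (y_i - ||W x_i||^2)^2."""
    preds = np.sum((X @ W.T) ** 2, axis=1)
    return 0.5 * np.mean((y - preds) ** 2)

def risk_gradient(W):
    """Gradient of the empirical risk; uses d/dW (x^T W^T W x) = 2 W x x^T."""
    preds = np.sum((X @ W.T) ** 2, axis=1)
    resid = preds - y
    return 2.0 * W @ ((X.T * resid) @ X) / n

W = 0.1 * rng.normal(size=(m, d))           # small-scale initialization (illustrative only)
step_size = 5e-4
for _ in range(2000):
    W -= step_size * risk_gradient(W)

print("empirical risk after gradient descent:", empirical_risk(W))
```

The hand-derived gradient is just the chain rule applied to ||Wx||²; an autodiff version (e.g., JAX or PyTorch) would produce the same update.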

Original language: English (US)
Journal: Mathematics of Operations Research
DOIs
State: Published - 2024
Externally published: Yes

Keywords

  • empirical risk minimization
  • generalization
  • gradient descent
  • initialization
  • neural networks
  • optimization landscape
  • semicircle law

ASJC Scopus subject areas

  • General Mathematics
  • Computer Science Applications
  • Management Science and Operations Research
