A Framework of Composite Functional Gradient Methods for Generative Adversarial Models

Rie Johnson, Tong Zhang

Research output: Contribution to journal › Article › peer-review

Abstract

Generative adversarial networks (GANs) are trained through a minimax game between a generator and a discriminator to generate data that mimics observations. While widely used, GAN training is known to be empirically unstable. This paper presents a new theory for generative adversarial methods that does not rely on the traditional minimax formulation. Our theory shows that with a strong discriminator, a good generator can be obtained by composite functional gradient learning, so that several distance measures (including the KL divergence and the JS divergence) between the probability distributions of real data and generated data are simultaneously improved after each functional gradient step until they converge to zero. This new point of view leads to stable procedures for training generative models. It also gives new theoretical insight into the original GAN. Empirical results on image generation show the effectiveness of our new method.
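The core idea in the abstract — moving generated data directly along a functional gradient derived from a trained discriminator, rather than playing a minimax game — can be illustrated with a toy one-dimensional sketch. This is an illustrative assumption for intuition only, not the algorithm or experiments from the paper: here the "generator" is just a set of particles, the discriminator is a scalar logistic regression, and each outer step pushes the particles along the gradient of log D(x).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch (not the paper's method): generated "particles" are pushed
# along the functional gradient implied by a logistic discriminator, so
# their distribution drifts toward the real-data distribution.
real = rng.normal(2.0, 1.0, size=500)   # samples from the "real" distribution
fake = rng.normal(0.0, 1.0, size=500)   # generated particles, initially off-target

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

a, b = 0.0, 0.0          # discriminator parameters: D(x) = sigmoid(a*x + b)
eta_d, eta_g = 0.1, 0.5  # discriminator / functional-gradient step sizes

for _ in range(50):
    # (Re)train the discriminator: gradient ascent on the logistic
    # log-likelihood with labels real=1, fake=0.
    for _ in range(20):
        pr, pf = sigmoid(a * real + b), sigmoid(a * fake + b)
        a += eta_d * (np.mean((1.0 - pr) * real) - np.mean(pf * fake))
        b += eta_d * (np.mean(1.0 - pr) - np.mean(pf))
    # Functional gradient step in data space: x <- x + eta * d/dx log D(x),
    # where d/dx log sigmoid(a*x + b) = a * (1 - D(x)).
    fake = fake + eta_g * a * (1.0 - sigmoid(a * fake + b))

print(abs(np.mean(fake) - np.mean(real)))  # gap shrinks after the updates
```

Note the self-correcting behavior: if the particles overshoot the real data, the retrained discriminator's slope `a` changes sign and the functional gradient pushes them back, which loosely mirrors the stability argument sketched in the abstract.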

Original language: English (US)
Article number: 8744312
Pages (from-to): 17-32
Number of pages: 16
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 43
Issue number: 1
DOIs
State: Published - Jan 1 2021
Externally published: Yes

Keywords

  • functional gradient learning
  • Generative adversarial models
  • image generation
  • neural networks

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics
