Dualing GANs

Yujia Li, Alexander Gerhard Schwing, Kuan Chieh Wang, Richard Zemel

Research output: Contribution to journal › Conference article

Abstract

Generative adversarial nets (GANs) are a promising technique for modeling a distribution from samples. It is, however, well known that GAN training suffers from instability due to the nature of its saddle-point formulation. In this paper, we explore ways to tackle the instability problem by dualizing the discriminator. We start from linear discriminators, in which case conjugate duality provides a mechanism to reformulate the saddle-point objective into a maximization problem, such that both the generator and the discriminator of this 'dualing GAN' act in concert. We then demonstrate how to extend this intuition to non-linear formulations. For GANs with linear discriminators our approach is able to remove the instability in training, while for GANs with non-linear discriminators our approach provides an alternative to the commonly used GAN training algorithm.
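The linear-discriminator case the abstract describes can be made concrete in a simplified setting: with a linear discriminator f_w(x) = w·phi(x) over a fixed feature map phi and weights constrained to ||w|| <= 1, the inner maximization over w has a closed form, and the saddle point collapses into a single minimization of ||E_data[phi(x)] - E_model[phi(x)]|| over the generator, i.e., moment matching in feature space. The toy sketch below trains a one-parameter generator this way. It is only an illustration of why removing the inner adversarial loop stabilizes training; the paper's actual dual is derived for the logistic GAN loss, and all names here (phi, theta, the shift generator) are hypothetical choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    # Fixed feature map; the linear discriminator would be f_w(x) = w . phi(x).
    return np.stack([x, x**2], axis=-1)

# Target distribution N(2, 1); generator G_theta(z) = theta + z with z ~ N(0, 1).
data = rng.normal(2.0, 1.0, size=10_000)
z = rng.normal(0.0, 1.0, size=10_000)
mu_data = phi(data).mean(axis=0)  # empirical E_data[phi(x)], roughly [2, 5]

theta, lr = 0.0, 0.05
for _ in range(500):
    fake = theta + z
    diff = mu_data - phi(fake).mean(axis=0)  # feature-moment mismatch
    # Gradient of 0.5 * ||diff||^2 w.r.t. theta, using
    # d phi(theta + z) / d theta = [1, 2 * (theta + z)].
    dphi = np.stack([np.ones_like(fake), 2.0 * fake], axis=-1)
    theta -= lr * (-(dphi.mean(axis=0) @ diff))

print(theta)  # close to the data mean of 2
```

Because the objective is a plain minimization, there is no inner discriminator loop to balance against the generator step, which is the stability property the abstract claims for the linear case.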

Original language: English (US)
Pages (from-to): 5607-5617
Number of pages: 11
Journal: Advances in Neural Information Processing Systems
Volume: 2017-December
State: Published - Jan 1 2017
Event: 31st Annual Conference on Neural Information Processing Systems, NIPS 2017 - Long Beach, United States
Duration: Dec 4 2017 - Dec 9 2017

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Li, Y., Schwing, A. G., Wang, K. C., & Zemel, R. (2017). Dualing GANs. Advances in Neural Information Processing Systems, 2017-December, 5607-5617.

@article{6d3cff5f4188486eaa6e55992e3daefa,
title = "Dualing GANs",
abstract = "Generative adversarial nets (GANs) are a promising technique for modeling a distribution from samples. It is however well known that GAN training suffers from instability due to the nature of its saddle point formulation. In this paper, we explore ways to tackle the instability problem by dualizing the discriminator. We start from linear discriminators in which case conjugate duality provides a mechanism to reformulate the saddle point objective into a maximization problem, such that both the generator and the discriminator of this 'dualing GAN' act in concert. We then demonstrate how to extend this intuition to non-linear formulations. For GANs with linear discriminators our approach is able to remove the instability in training, while for GANs with nonlinear discriminators our approach provides an alternative to the commonly used GAN training algorithm.",
author = "Li, Yujia and Schwing, {Alexander Gerhard} and Wang, {Kuan Chieh} and Zemel, Richard",
year = "2017",
month = "1",
day = "1",
language = "English (US)",
volume = "2017-December",
pages = "5607--5617",
journal = "Advances in Neural Information Processing Systems",
issn = "1049-5258",
}
