Dualing GANs

Yujia Li, Alexander Schwing, Kuan Chieh Wang, Richard Zemel

Research output: Contribution to journal › Conference article


Generative adversarial nets (GANs) are a promising technique for modeling a distribution from samples. It is, however, well known that GAN training suffers from instability due to the nature of its saddle point formulation. In this paper, we explore ways to tackle the instability problem by dualizing the discriminator. We start from linear discriminators, for which conjugate duality provides a mechanism to reformulate the saddle point objective into a maximization problem, so that both the generator and the discriminator of this 'dualing GAN' act in concert. We then demonstrate how to extend this intuition to non-linear formulations. For GANs with linear discriminators our approach removes the instability in training, while for GANs with non-linear discriminators it provides an alternative to the commonly used GAN training algorithm.
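A minimal numerical sketch of the core idea for the linear case (an illustrative assumption, not the paper's exact formulation): with a linear discriminator D(x) = wᵀx and an L2 penalty (λ/2)‖w‖², the inner maximization over w has a closed form, so the min-max saddle point collapses into a single moment-matching objective for the generator.

```python
import numpy as np

# Toy illustration (assumed setup, not the paper's exact objective):
# linear discriminator D(x) = w^T x with L2 penalty (lam/2)||w||^2.
rng = np.random.default_rng(0)
lam = 0.5

mu_data = rng.normal(size=3)  # mean feature of real samples
mu_gen = rng.normal(size=3)   # mean feature of generated samples

# Inner problem: max_w  w^T (mu_data - mu_gen) - (lam/2) ||w||^2.
# Stationarity gives w* = (mu_data - mu_gen) / lam, and the optimal
# value evaluates to ||mu_data - mu_gen||^2 / (2 lam) -- a closed-form
# dual that depends only on the generator via mu_gen.
diff = mu_data - mu_gen
w_star = diff / lam
dual_value = np.dot(diff, diff) / (2 * lam)

# Check the closed form against a direct evaluation at w*.
primal_at_w_star = np.dot(w_star, diff) - 0.5 * lam * np.dot(w_star, w_star)
print(np.isclose(primal_at_w_star, dual_value))  # True
```

Because the dual value depends on the generator only through the feature-mean gap, the generator can minimize it directly by gradient descent with no adversarial inner loop, which is the source of the stability the abstract describes.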

Original language: English (US)
Pages (from-to): 5607-5617
Number of pages: 11
Journal: Advances in Neural Information Processing Systems
State: Published - Jan 1 2017
Event: 31st Annual Conference on Neural Information Processing Systems, NIPS 2017 - Long Beach, United States
Duration: Dec 4 2017 - Dec 9 2017


ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Li, Y., Schwing, A., Wang, K. C., & Zemel, R. (2017). Dualing GANs. Advances in Neural Information Processing Systems, 2017-December, 5607-5617.