Toward a Controllable Disentanglement Network

Zengjie Song, Oluwasanmi Koyejo, Jiangshe Zhang

Research output: Contribution to journal › Article › peer-review


This article addresses two crucial problems in learning disentangled image representations: controlling the degree of disentanglement during image editing, and balancing disentanglement strength against reconstruction quality. To encourage disentanglement, we devise a distance covariance-based decorrelation regularization. For the reconstruction step, our model combines a soft target representation with the latent image code; by exploring the real-valued space of the soft target representation, we can synthesize novel images with designated properties. To improve the perceptual quality of images generated by autoencoder (AE)-based models, we extend the encoder-decoder architecture with a generative adversarial network (GAN), collapsing the AE decoder and the GAN generator into one network. We also design a classification-based protocol to quantitatively evaluate the disentanglement strength of our model. Experimental results showcase the benefits of the proposed model.
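To make the decorrelation idea concrete, below is a minimal NumPy sketch of the (squared) sample distance covariance statistic of Székely et al., which the abstract's regularization is based on. This is an illustrative implementation of the general statistic, not the paper's exact regularizer; the function name and batch shapes are assumptions for the example.

```python
import numpy as np

def distance_covariance(x, y):
    """Squared sample distance covariance between two batches of
    representations x (n, p) and y (n, q). Values near zero indicate
    the two representations are close to statistically independent,
    so this quantity can serve as a decorrelation penalty (sketch only).
    """
    # Pairwise Euclidean distance matrices, shape (n, n)
    a = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    b = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    # Double-center each distance matrix (subtract row/column means,
    # add back the grand mean)
    A = a - a.mean(axis=0, keepdims=True) - a.mean(axis=1, keepdims=True) + a.mean()
    B = b - b.mean(axis=0, keepdims=True) - b.mean(axis=1, keepdims=True) + b.mean()
    # V-statistic estimate of squared distance covariance (always >= 0)
    return (A * B).mean()
```

In a disentanglement setting, such a penalty would be evaluated between the latent sub-codes one wants to decouple and added to the reconstruction loss.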

Original language: English (US)
Pages (from-to): 2491-2504
Number of pages: 14
Journal: IEEE Transactions on Cybernetics
Issue number: 4
State: Published - Apr 1 2022


Keywords
  • Autoencoder (AE)
  • Decorrelation regularization
  • Generative adversarial network (GAN)
  • Image generation
  • Representation learning

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Information Systems
  • Human-Computer Interaction
  • Computer Science Applications
  • Electrical and Electronic Engineering
