Image generation step by step: animation generation-image translation

Beibei Jing, Hongwei Ding, Zhijun Yang, Bo Li, Qianlin Liu

Research output: Contribution to journal › Article › peer-review


Generative adversarial networks play an important role in image generation, but generating high-resolution images from complex data sets remains a challenging goal. In this paper, we propose the LGAN (Link Generative Adversarial Networks) model, which effectively enhances the quality of synthesized images. The LGAN model consists of two parts, G1 and G2. G1 handles the unconditional generation stage: it generates anime images with highly abstract features that contain few coefficients yet continuous image elements covering the overall image characteristics. G2 handles the conditional generation stage (image translation) and consists of a mapping network and a super-resolution network. The mapping network maps the output of G1 onto a real-world image that has been processed by semantic segmentation or edge detection; the super-resolution network then super-resolves the mapped image to improve its resolution. In comparison tests against WGAN, SAGAN, WGAN-GP and PG-GAN, our LGAN(SEG) leads by 64.36 and 12.28, respectively, demonstrating the model's superiority.
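The staged G1 → mapping → super-resolution pipeline described above can be sketched as follows. This is only an illustrative outline, not the paper's implementation: each learned network is replaced by a hypothetical placeholder transform (a random projection for G1, a blend for the mapping network, and nearest-neighbour upsampling for super-resolution), and the segmentation map is random stand-in data.

```python
import numpy as np

rng = np.random.default_rng(0)

def g1_unconditional(z, size=32):
    # Placeholder for G1: map a latent vector to a low-resolution
    # "abstract" anime image with values squashed into [0, 1].
    w = rng.standard_normal((z.size, size * size * 3))
    img = 1.0 / (1.0 + np.exp(-(z @ w)))   # sigmoid activation
    return img.reshape(size, size, 3)

def mapping_network(abstract_img, seg_map):
    # Placeholder for G2's mapping network: fuse the abstract image
    # with a semantic-segmentation (or edge-detection) map of a scene.
    return 0.5 * abstract_img + 0.5 * seg_map

def superresolution(img, scale=2):
    # Placeholder for G2's super-resolution network:
    # nearest-neighbour upsampling stands in for learned upscaling.
    return img.repeat(scale, axis=0).repeat(scale, axis=1)

z = rng.standard_normal(64)               # latent code
abstract = g1_unconditional(z)            # stage 1: unconditional generation
seg = rng.random((32, 32, 3))             # stand-in segmentation map
mapped = mapping_network(abstract, seg)   # stage 2a: image translation
final = superresolution(mapped, scale=2)  # stage 2b: super-resolution

print(abstract.shape, mapped.shape, final.shape)
# (32, 32, 3) (32, 32, 3) (64, 64, 3)
```

The point of the staging is visible in the shapes: G1 works at a coarse resolution, the mapping network keeps that resolution while conditioning on the segmentation map, and only the final super-resolution step raises the output size.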

Original language: English (US)
Pages (from-to): 8087-8100
Number of pages: 14
Journal: Applied Intelligence
Issue number: 7
State: Published - May 2022
Externally published: Yes


Keywords

  • Anime images conditional generation part
  • Generative adversarial networks
  • LGAN (Link Generative Adversarial Networks)
  • Super-resolution network
  • Unconditional generation part

ASJC Scopus subject areas

  • Artificial Intelligence


