TY - JOUR
T1 - Image generation step by step
T2 - animation generation-image translation
AU - Jing, Beibei
AU - Ding, Hongwei
AU - Yang, Zhijun
AU - Li, Bo
AU - Liu, Qianlin
N1 - Funding Information:
This work is partially supported by the National Natural Science Foundation of China (61461053) and the Yunnan University of China Postgraduate Science Foundation under Grant (2020306).
Publisher Copyright:
© 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2022/5
Y1 - 2022/5
N2 - Generative adversarial networks play an important role in image generation, but generating high-resolution images from complex data sets remains a challenging goal. In this paper, we propose the LGAN (Link Generative Adversarial Networks) model, which can effectively enhance the quality of synthesized images. The LGAN model consists of two parts, G1 and G2. G1 is responsible for the unconditional generation part: it generates anime images with highly abstract features that contain few coefficients yet continuous image elements covering the overall image content. G2 is responsible for the conditional generation part (image translation) and consists of a mapping network and a super-resolution network. The mapping network maps the output of G1 onto a real-world image that has been processed by semantic segmentation or edge detection; the super-resolution network then super-resolves the mapped image to improve its resolution. In comparison tests against WGAN, SAGAN, WGAN-GP and PG-GAN, this paper’s LGAN(SEG) leads by 64.36 and 12.28, respectively, demonstrating the model’s superiority.
AB - Generative adversarial networks play an important role in image generation, but generating high-resolution images from complex data sets remains a challenging goal. In this paper, we propose the LGAN (Link Generative Adversarial Networks) model, which can effectively enhance the quality of synthesized images. The LGAN model consists of two parts, G1 and G2. G1 is responsible for the unconditional generation part: it generates anime images with highly abstract features that contain few coefficients yet continuous image elements covering the overall image content. G2 is responsible for the conditional generation part (image translation) and consists of a mapping network and a super-resolution network. The mapping network maps the output of G1 onto a real-world image that has been processed by semantic segmentation or edge detection; the super-resolution network then super-resolves the mapped image to improve its resolution. In comparison tests against WGAN, SAGAN, WGAN-GP and PG-GAN, this paper’s LGAN(SEG) leads by 64.36 and 12.28, respectively, demonstrating the model’s superiority.
KW - Anime images
KW - Conditional generation part
KW - Generative adversarial networks
KW - LGAN (Link Generative Adversarial Networks)
KW - Super-resolution network
KW - Unconditional generation part
UR - http://www.scopus.com/inward/record.url?scp=85117288407&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85117288407&partnerID=8YFLogxK
U2 - 10.1007/s10489-021-02835-z
DO - 10.1007/s10489-021-02835-z
M3 - Article
AN - SCOPUS:85117288407
VL - 52
SP - 8087
EP - 8100
JO - Applied Intelligence
JF - Applied Intelligence
SN - 0924-669X
IS - 7
ER -