TY - GEN
T1 - Unsupervised image-to-image translation with stacked cycle-consistent adversarial networks
AU - Li, Minjun
AU - Huang, Haozhi
AU - Ma, Lin
AU - Liu, Wei
AU - Zhang, Tong
AU - Jiang, Yugang
N1 - This work was supported by two projects from NSFC (#61622204 and #61572134) and two projects from STCSM (#16JC1420401 and #16QA1400500).
PY - 2018
Y1 - 2018
N2 - Recent studies on unsupervised image-to-image translation have made remarkable progress by training a pair of generative adversarial networks with a cycle-consistent loss. However, such unsupervised methods may generate inferior results when the image resolution is high or the two image domains are of significant appearance differences, such as the translations between semantic layouts and natural images in the Cityscapes dataset. In this paper, we propose novel Stacked Cycle-Consistent Adversarial Networks (SCANs) by decomposing a single translation into multi-stage transformations, which not only boost the image translation quality but also enable higher resolution image-to-image translation in a coarse-to-fine fashion. Moreover, to properly exploit the information from the previous stage, an adaptive fusion block is devised to learn a dynamic integration of the current stage’s output and the previous stage’s output. Experiments on multiple datasets demonstrate that our proposed approach can improve the translation quality compared with previous single-stage unsupervised methods.
AB - Recent studies on unsupervised image-to-image translation have made remarkable progress by training a pair of generative adversarial networks with a cycle-consistent loss. However, such unsupervised methods may generate inferior results when the image resolution is high or the two image domains are of significant appearance differences, such as the translations between semantic layouts and natural images in the Cityscapes dataset. In this paper, we propose novel Stacked Cycle-Consistent Adversarial Networks (SCANs) by decomposing a single translation into multi-stage transformations, which not only boost the image translation quality but also enable higher resolution image-to-image translation in a coarse-to-fine fashion. Moreover, to properly exploit the information from the previous stage, an adaptive fusion block is devised to learn a dynamic integration of the current stage’s output and the previous stage’s output. Experiments on multiple datasets demonstrate that our proposed approach can improve the translation quality compared with previous single-stage unsupervised methods.
KW - Generative adversarial network (GAN)
KW - Image-to-image translation
KW - Unsupervised learning
UR - https://www.scopus.com/pages/publications/85055132185
U2 - 10.1007/978-3-030-01240-3_12
DO - 10.1007/978-3-030-01240-3_12
M3 - Conference contribution
AN - SCOPUS:85055132185
SN - 9783030012397
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 186
EP - 201
BT - Computer Vision – ECCV 2018 - 15th European Conference, 2018, Proceedings
A2 - Hebert, Martial
A2 - Ferrari, Vittorio
A2 - Sminchisescu, Cristian
A2 - Weiss, Yair
PB - Springer
T2 - 15th European Conference on Computer Vision, ECCV 2018
Y2 - 8 September 2018 through 14 September 2018
ER -