TY - GEN
T1 - Deep Image Destruction
T2 - 26th International Conference on Pattern Recognition, ICPR 2022
AU - Choi, Jun-Ho
AU - Zhang, Huan
AU - Kim, Jun-Hyuk
AU - Hsieh, Cho-Jui
AU - Lee, Jong-Seok
N1 - Funding Information:
This work was supported in part by the Artificial Intelligence Graduate School Program, Yonsei University under Grant 2020-0-01361, and in part by the Ministry of Trade, Industry and Energy (MOTIE) under Grant P0014268.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Recently, the vulnerability of deep image classification models to adversarial attacks has been investigated. However, this issue has not been thoroughly studied for image-to-image tasks, which take an input image and generate an output image (e.g., colorization, denoising, and deblurring). This paper presents a comprehensive investigation into the vulnerability of deep image-to-image models to adversarial attacks. For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints, such as output quality degradation due to attacks, transferability of adversarial examples across different tasks, and characteristics of perturbations. We show that, unlike in image classification tasks, the performance degradation on image-to-image tasks varies greatly depending on various factors, e.g., attack methods and task objectives. In addition, we analyze the effectiveness of conventional defense methods used for classification models in improving the robustness of image-to-image models.
AB - Recently, the vulnerability of deep image classification models to adversarial attacks has been investigated. However, this issue has not been thoroughly studied for image-to-image tasks, which take an input image and generate an output image (e.g., colorization, denoising, and deblurring). This paper presents a comprehensive investigation into the vulnerability of deep image-to-image models to adversarial attacks. For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints, such as output quality degradation due to attacks, transferability of adversarial examples across different tasks, and characteristics of perturbations. We show that, unlike in image classification tasks, the performance degradation on image-to-image tasks varies greatly depending on various factors, e.g., attack methods and task objectives. In addition, we analyze the effectiveness of conventional defense methods used for classification models in improving the robustness of image-to-image models.
UR - http://www.scopus.com/inward/record.url?scp=85143606089&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85143606089&partnerID=8YFLogxK
U2 - 10.1109/ICPR56361.2022.9956577
DO - 10.1109/ICPR56361.2022.9956577
M3 - Conference contribution
AN - SCOPUS:85143606089
T3 - Proceedings - International Conference on Pattern Recognition
SP - 1287
EP - 1293
BT - 2022 26th International Conference on Pattern Recognition, ICPR 2022
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 21 August 2022 through 25 August 2022
ER -