TY - GEN
T1 - REFIT: A Unified Watermark Removal Framework for Deep Learning Systems with Limited Data
T2 - 16th ACM Asia Conference on Computer and Communications Security, ASIA CCS 2021
AU - Chen, Xinyun
AU - Wang, Wenxiao
AU - Bender, Chris
AU - Ding, Yiming
AU - Jia, Ruoxi
AU - Li, Bo
AU - Song, Dawn
N1 - Publisher Copyright:
© 2021 Owner/Author.
PY - 2021/5/24
Y1 - 2021/5/24
AB - Training deep neural networks from scratch can be computationally expensive and requires large amounts of training data. Recent work has explored different watermarking techniques to protect pre-trained deep neural networks from potential copyright infringement. However, these techniques can be vulnerable to watermark removal attacks. In this work, we propose REFIT, a unified watermark removal framework based on fine-tuning that does not rely on knowledge of the watermarks and is effective against a wide range of watermarking schemes. In particular, we conduct a comprehensive study of a realistic attack scenario in which the adversary has limited training data, a setting that has not been emphasized in prior work on attacks against watermarking schemes. To effectively remove the watermarks without compromising model functionality under this weak threat model, we propose two techniques that are incorporated into our fine-tuning framework: (1) an adaptation of the elastic weight consolidation (EWC) algorithm, originally proposed to mitigate the catastrophic forgetting phenomenon; and (2) unlabeled data augmentation (AU), in which we leverage auxiliary unlabeled data from other sources. Our extensive evaluation shows the effectiveness of REFIT against diverse watermark embedding schemes. In particular, both EWC and AU significantly decrease the amount of labeled training data needed for effective watermark removal, and the unlabeled data samples used for AU need not be drawn from the same distribution as the benign data used for model evaluation. The experimental results demonstrate that our fine-tuning-based watermark removal attacks could pose real threats to the copyright of pre-trained models, and thus highlight the importance of further investigating the watermarking problem and developing watermark embedding schemes that are more robust against such attacks.
KW - fine-tuning
KW - neural networks
KW - watermark removal
UR - http://www.scopus.com/inward/record.url?scp=85108071874&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85108071874&partnerID=8YFLogxK
U2 - 10.1145/3433210.3453079
DO - 10.1145/3433210.3453079
M3 - Conference contribution
AN - SCOPUS:85108071874
T3 - ASIA CCS 2021 - Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security
SP - 321
EP - 335
BT - ASIA CCS 2021 - Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security
PB - Association for Computing Machinery
Y2 - 7 June 2021 through 11 June 2021
ER -