TY - CONF
T1 - Entangled watermarks as a defense against model extraction
AU - Jia, Hengrui
AU - Choquette-Choo, Christopher A.
AU - Chandrasekaran, Varun
AU - Papernot, Nicolas
N1 - Funding Information:
The authors would like to thank Carrie Gates for shepherding this paper. This research was funded by CIFAR, DARPA GARD, Microsoft, and NSERC. VC was funded in part by the Landweber Fellowship.
Publisher Copyright:
© 2021 by The USENIX Association. All rights reserved.
PY - 2021
Y1 - 2021
AB - Machine learning involves expensive data collection and training procedures. Model owners may be concerned that valuable intellectual property can be leaked if adversaries mount model extraction attacks. As it is difficult to defend against model extraction without sacrificing significant prediction accuracy, watermarking instead leverages unused model capacity to have the model overfit to outlier input-output pairs. Such pairs are watermarks, which are not sampled from the task distribution and are only known to the defender. The defender then demonstrates knowledge of the input-output pairs to claim ownership of the model at inference. The effectiveness of watermarks remains limited because they are distinct from the task distribution and can thus be easily removed through compression or other forms of knowledge transfer. We introduce Entangled Watermarking Embeddings (EWE). Our approach encourages the model to learn features for classifying data that is sampled from the task distribution and data that encodes watermarks. An adversary attempting to remove watermarks that are entangled with legitimate data is also forced to sacrifice performance on legitimate data. Experiments on MNIST, Fashion-MNIST, CIFAR-10, and Speech Commands validate that the defender can claim model ownership with 95% confidence using fewer than 100 queries to the stolen copy, at a modest cost below 0.81 percentage points on average in the defended model's performance.
UR - http://www.scopus.com/inward/record.url?scp=85104223567&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85104223567&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85104223567
T3 - Proceedings of the 30th USENIX Security Symposium
SP - 1937
EP - 1954
BT - Proceedings of the 30th USENIX Security Symposium
PB - USENIX Association
T2 - 30th USENIX Security Symposium, USENIX Security 2021
Y2 - 11 August 2021 through 13 August 2021
ER -