Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models

Jialuo Chen, Jingyi Wang, Tinglan Peng, Youcheng Sun, Peng Cheng, Shouling Ji, Xingjun Ma, Bo Li, Dawn Song

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Deep learning models, especially large-scale and high-performance ones, can be very costly to train, demanding a considerable amount of data and computational resources. As a result, deep learning models have become one of the most valuable assets in modern artificial intelligence. Unauthorized duplication or reproduction of deep learning models can lead to copyright infringement and cause huge economic losses to model owners, calling for effective copyright protection techniques. Existing protection techniques are mostly based on watermarking, which embeds an owner-specified watermark into the model. While able to provide exact ownership verification, these techniques are 1) invasive, i.e., they need to tamper with the training process, which may affect the model utility or introduce new security risks into the model; 2) prone to adaptive attacks that attempt to remove/replace the watermark or adversarially block the retrieval of the watermark; and 3) not robust to the emerging model extraction attacks. The latest fingerprinting work on deep learning models, though non-invasive, also falls short when facing the diverse and ever-growing attack scenarios. In this paper, we propose a novel testing framework for deep learning copyright protection: DEEPJUDGE. DEEPJUDGE quantitatively tests the similarities between two deep learning models: a victim model and a suspect model. It leverages a diverse set of testing metrics and efficient test case generation algorithms to produce a chain of supporting evidence to help determine whether a suspect model is a copy of the victim model.
Advantages of DEEPJUDGE include: 1) non-invasive, as it works directly on the model and does not tamper with the training process; 2) efficient, as it only needs a small set of seed test cases and a quick scan of the two models; 3) flexible, i.e., it can easily incorporate new testing metrics or test case generation methods to obtain a more confident and robust judgment; and 4) fairly robust to model extraction attacks and adaptive attacks. We verify the effectiveness of DEEPJUDGE under three typical copyright infringement scenarios, including model fine-tuning, pruning and extraction, via extensive experiments on both image classification and speech recognition datasets with a variety of model architectures.
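To make the core idea concrete, the sketch below illustrates one kind of testing metric the abstract alludes to: quantitatively comparing a victim and a suspect model by measuring how close their outputs are on a small set of seed test cases. This is a hypothetical, simplified illustration, not the paper's actual metric suite; the models here are toy softmax classifiers, and `output_distance` is an assumed name for an average-output-distance score (a copied or lightly perturbed model yields a score near zero, while an unrelated model yields a larger one).

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def output_distance(victim, suspect, seeds):
    """Average L2 distance between the two models' output
    probability vectors over a set of seed test cases.
    Lower values indicate more similar behavior."""
    dists = [np.linalg.norm(victim(x) - suspect(x)) for x in seeds]
    return float(np.mean(dists))

# Toy stand-ins for trained models: each maps an input vector
# to a probability vector via a linear layer plus softmax.
rng = np.random.default_rng(0)
W_victim = np.array([[1.0, 0.0], [0.0, 1.0]])
W_copy = W_victim + 0.01 * rng.standard_normal((2, 2))  # lightly perturbed "copy"

victim = lambda x: softmax(W_victim @ x)
suspect = lambda x: softmax(W_copy @ x)
unrelated = lambda x: softmax(-W_victim @ x)  # a behaviorally different model

seeds = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
print(output_distance(victim, suspect, seeds))    # small: behaves like the victim
print(output_distance(victim, unrelated, seeds))  # large: behaves differently
```

In the actual framework, such per-metric scores would be aggregated across multiple metrics and carefully generated test cases to build a chain of evidence, rather than relying on a single threshold.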

Original language: English (US)
Title of host publication: Proceedings - 43rd IEEE Symposium on Security and Privacy, SP 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 18
ISBN (Electronic): 9781665413169
State: Published - 2022
Externally published: Yes
Event: 43rd IEEE Symposium on Security and Privacy, SP 2022 - San Francisco, United States
Duration: May 23 2022 - May 26 2022

Publication series

Name: Proceedings - IEEE Symposium on Security and Privacy
ISSN (Print): 1081-6011


Conference: 43rd IEEE Symposium on Security and Privacy, SP 2022
Country/Territory: United States
City: San Francisco

ASJC Scopus subject areas

  • Safety, Risk, Reliability and Quality
  • Software
  • Computer Networks and Communications


