Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk

Zhangheng Li, Junyuan Hong, Bo Li, Zhangyang Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

While diffusion models have recently demonstrated remarkable progress in generating realistic images, privacy risks also arise: published models or APIs could generate training images and thus leak privacy-sensitive training information. In this paper, we reveal a new risk, Shake-to-Leak (S2L): fine-tuning pre-trained models with manipulated data can amplify existing privacy risks. We demonstrate that S2L can occur under various standard fine-tuning strategies for diffusion models, including concept-injection methods (DreamBooth and Textual Inversion) and parameter-efficient methods (LoRA and Hypernetwork), as well as their combinations. In the worst case, S2L amplifies the state-of-the-art membership inference attack (MIA) on diffusion models by 5.4% AUC (absolute difference) and increases the number of extracted private samples from almost 0 to an average of 16.3 per target domain. This discovery underscores that the privacy risk posed by diffusion models is even more severe than previously recognized. Code is available at https://github.com/VITA-Group/Shake-to-Leak.
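The abstract sketches a two-stage pipeline: fine-tune a pre-trained diffusion model on manipulated data, then run a membership inference attack whose member/non-member gap the fine-tuning has widened. As a rough illustration of the attack side, below is a minimal, self-contained sketch of the loss-threshold MIA that such attacks build on. The `ToyDenoiser`, the threshold value, and all hyperparameters are hypothetical placeholders, not the paper's implementation; in practice the model would be, e.g., a Stable Diffusion UNet fine-tuned via DreamBooth, Textual Inversion, LoRA, or a Hypernetwork.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pre-trained epsilon-prediction diffusion model.
# A real attack would target, e.g., a Stable Diffusion UNet; a toy MLP keeps
# the sketch self-contained and runnable.
class ToyDenoiser(nn.Module):
    def __init__(self, dim=32, max_t=1000):
        super().__init__()
        self.max_t = max_t
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.ReLU(),
                                 nn.Linear(128, dim))

    def forward(self, x_t, t):
        # Condition on the normalized timestep by concatenation.
        t_feat = t.float().unsqueeze(-1) / self.max_t
        return self.net(torch.cat([x_t, t_feat], dim=-1))

@torch.no_grad()
def diffusion_loss(model, x0, n_timesteps=1000, n_samples=16):
    """Per-example epsilon-prediction loss, averaged over random timesteps --
    the score used by a standard loss-threshold MIA on diffusion models."""
    betas = torch.linspace(1e-4, 0.02, n_timesteps)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    losses = []
    for _ in range(n_samples):
        t = torch.randint(0, n_timesteps, (x0.shape[0],))
        eps = torch.randn_like(x0)
        ab = alpha_bar[t].unsqueeze(-1)
        x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps  # forward diffusion
        losses.append(((model(x_t, t) - eps) ** 2).mean(dim=-1))
    return torch.stack(losses).mean(dim=0)

def mia_predict(model, x0, threshold):
    """Predict 'member' when the denoising loss falls below the threshold."""
    return diffusion_loss(model, x0) < threshold

model = ToyDenoiser()
candidates = torch.randn(8, 32)  # hypothetical candidate images (flattened)
print(mia_predict(model, candidates, threshold=1.0))
```

The attack scores each candidate by how well the model denoises it; training members tend to incur lower loss. S2L's finding, in these terms, is that fine-tuning on manipulated data lowers members' losses disproportionately, so the same threshold attack becomes more accurate after fine-tuning.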

Original language: English (US)
Title of host publication: Proceedings - IEEE Conference on Safe and Trustworthy Machine Learning, SaTML 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 18-32
Number of pages: 15
ISBN (Electronic): 9798350349504
DOIs
State: Published - 2024
Event: 2024 IEEE Conference on Safe and Trustworthy Machine Learning, SaTML 2024 - Toronto, Canada
Duration: Apr 9, 2024 – Apr 11, 2024

Publication series

Name: Proceedings - IEEE Conference on Safe and Trustworthy Machine Learning, SaTML 2024

Conference

Conference: 2024 IEEE Conference on Safe and Trustworthy Machine Learning, SaTML 2024
Country/Territory: Canada
City: Toronto
Period: 4/9/24 – 4/11/24

Keywords

  • Deep learning
  • diffusion models
  • fine-tuning
  • generative models
  • privacy risk

ASJC Scopus subject areas

  • Artificial Intelligence
  • Safety, Risk, Reliability and Quality
  • Modeling and Simulation
