TY - JOUR
T1 - Fair Machine Unlearning
T2 - 27th International Conference on Artificial Intelligence and Statistics, AISTATS 2024
AU - Oesterling, Alex
AU - Ma, Jiaqi
AU - Calmon, Flavio P.
AU - Lakkaraju, Himabindu
N1 - This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2140743. This work is also supported in part by the NSF awards IIS-2008461, IIS-2040989, IIS-2238714, FAI-2040880, and research awards from Google, JP Morgan, Amazon, Adobe, Harvard Data Science Initiative, and the Digital, Data, and Design (D3) Institute at Harvard. The views expressed here are those of the authors and do not reflect the official policy or position of the funding agencies.
PY - 2024
Y1 - 2024
N2 - The Right to be Forgotten is a core principle outlined by regulatory frameworks such as the EU’s General Data Protection Regulation (GDPR). This principle allows individuals to request that their personal data be deleted from deployed machine learning models. While “forgetting” can be naively achieved by retraining on the remaining dataset, it is computationally expensive to do so with each new request. As such, several machine unlearning methods have been proposed as efficient alternatives to retraining. These methods aim to approximate the predictive performance of retraining, but fail to consider how unlearning impacts other properties critical to real-world applications such as fairness. In this work, we demonstrate that most efficient unlearning methods cannot accommodate popular fairness interventions, and we propose the first fair machine unlearning method that can efficiently unlearn data instances from a fair objective. We derive theoretical results which demonstrate that our method can provably unlearn data and provably maintain fairness performance. Extensive experimentation with real-world datasets highlights the efficacy of our method at unlearning data instances while preserving fairness. Code is provided at https://github.com/AI4LIFE-GROUP/fair-unlearning.
UR - http://www.scopus.com/inward/record.url?scp=85194182935&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85194182935&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85194182935
SN - 2640-3498
VL - 238
SP - 3736
EP - 3744
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
Y2 - 2 May 2024 through 4 May 2024
ER -