TY - GEN
T1 - Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
AU - Xie, Chulin
AU - Long, Yunhui
AU - Chen, Pin-Yu
AU - Li, Qinbin
AU - Koyejo, Sanmi
AU - Li, Bo
N1 - Publisher Copyright:
© 2023 Copyright held by the owner/author(s).
PY - 2023/11/15
Y1 - 2023/11/15
N2 - Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users. As local training data comes from different users who may not be trustworthy, several studies have shown that FL is vulnerable to poisoning attacks. Meanwhile, to protect the privacy of local users, FL is usually trained in a differentially private way (DPFL). Thus, in this paper, we ask: What are the underlying connections between differential privacy and certified robustness in FL against poisoning attacks? Can we leverage the innate privacy property of DPFL to provide certified robustness for FL? Can we further improve the privacy of FL to improve such robustness certification? We first investigate both user-level and instance-level privacy of FL and provide formal privacy analysis to achieve improved instance-level privacy. We then provide two robustness certification criteria: certified prediction and certified attack inefficacy for DPFL on both user and instance levels. Theoretically, we provide the certified robustness of DPFL based on both criteria given a bounded number of adversarial users or instances. Empirically, we conduct extensive experiments to verify our theories under a range of poisoning attacks on different datasets. We find that increasing the level of privacy protection in DPFL results in stronger certified attack inefficacy; however, it does not necessarily lead to a stronger certified prediction. Thus, achieving the optimal certified prediction requires a proper balance between privacy and utility loss.
AB - Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users. As local training data comes from different users who may not be trustworthy, several studies have shown that FL is vulnerable to poisoning attacks. Meanwhile, to protect the privacy of local users, FL is usually trained in a differentially private way (DPFL). Thus, in this paper, we ask: What are the underlying connections between differential privacy and certified robustness in FL against poisoning attacks? Can we leverage the innate privacy property of DPFL to provide certified robustness for FL? Can we further improve the privacy of FL to improve such robustness certification? We first investigate both user-level and instance-level privacy of FL and provide formal privacy analysis to achieve improved instance-level privacy. We then provide two robustness certification criteria: certified prediction and certified attack inefficacy for DPFL on both user and instance levels. Theoretically, we provide the certified robustness of DPFL based on both criteria given a bounded number of adversarial users or instances. Empirically, we conduct extensive experiments to verify our theories under a range of poisoning attacks on different datasets. We find that increasing the level of privacy protection in DPFL results in stronger certified attack inefficacy; however, it does not necessarily lead to a stronger certified prediction. Thus, achieving the optimal certified prediction requires a proper balance between privacy and utility loss.
KW - Certified Robustness
KW - Differential Privacy
KW - Federated Learning
KW - Poisoning Attacks
UR - http://www.scopus.com/inward/record.url?scp=85179837772&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85179837772&partnerID=8YFLogxK
U2 - 10.1145/3576915.3623193
DO - 10.1145/3576915.3623193
M3 - Conference contribution
AN - SCOPUS:85179837772
T3 - CCS 2023 - Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security
SP - 1511
EP - 1525
BT - CCS 2023 - Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security
PB - Association for Computing Machinery
T2 - 30th ACM SIGSAC Conference on Computer and Communications Security, CCS 2023
Y2 - 26 November 2023 through 30 November 2023
ER -