TY - GEN
T1 - Hierarchical Federated Learning with Privacy
AU - Chandrasekaran, Varun
AU - Banerjee, Suman
AU - Perino, Diego
AU - Kourtellis, Nicolas
N1 - This research was partially supported by the Spanish Ministry of Economic Affairs and Digital Transformation under agreement TSI-063000-2021-63 (MAP-6G), and the EU Horizon 2020 program under grant agreement 101070473 (FLUIDOS). The views and opinions expressed are those of the authors only and do not necessarily reflect those of the funding agencies.
PY - 2024
Y1 - 2024
N2 - Recent work highlights how gradient-level access can lead to successful inference and reconstruction attacks against federated learning (FL). In such settings, differentially private (DP) learning is known to provide resilience. However, the status-quo approaches (i.e., central and local DP) introduce disparate utility vs. privacy trade-offs. In this work, we mitigate such trade-offs through hierarchical FL (HFL). For the first time, we demonstrate that introducing a new intermediary level where calibrated noise can be added yields better trade-offs; we term this hierarchical DP (HDP). Our experiments with 3 different datasets (commonly used as benchmarks for FL in prior works) suggest that HDP produces models as accurate as those obtained using central DP (where noise is added at a central aggregator), at a lower privacy budget.
AB - Recent work highlights how gradient-level access can lead to successful inference and reconstruction attacks against federated learning (FL). In such settings, differentially private (DP) learning is known to provide resilience. However, the status-quo approaches (i.e., central and local DP) introduce disparate utility vs. privacy trade-offs. In this work, we mitigate such trade-offs through hierarchical FL (HFL). For the first time, we demonstrate that introducing a new intermediary level where calibrated noise can be added yields better trade-offs; we term this hierarchical DP (HDP). Our experiments with 3 different datasets (commonly used as benchmarks for FL in prior works) suggest that HDP produces models as accurate as those obtained using central DP (where noise is added at a central aggregator), at a lower privacy budget.
UR - http://www.scopus.com/inward/record.url?scp=85218040011&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85218040011&partnerID=8YFLogxK
U2 - 10.1109/BigData62323.2024.10826023
DO - 10.1109/BigData62323.2024.10826023
M3 - Conference contribution
AN - SCOPUS:85218040011
T3 - Proceedings - 2024 IEEE International Conference on Big Data, BigData 2024
SP - 1516
EP - 1525
BT - Proceedings - 2024 IEEE International Conference on Big Data, BigData 2024
A2 - Ding, Wei
A2 - Lu, Chang-Tien
A2 - Wang, Fusheng
A2 - Di, Liping
A2 - Wu, Kesheng
A2 - Huan, Jun
A2 - Nambiar, Raghu
A2 - Li, Jundong
A2 - Ilievski, Filip
A2 - Baeza-Yates, Ricardo
A2 - Hu, Xiaohua
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE International Conference on Big Data, BigData 2024
Y2 - 15 December 2024 through 18 December 2024
ER -