TY - JOUR
T1 - Fair Federated Learning via the Proportional Veto Core
AU - Chaudhury, Bhaskar Ray
AU - Murhekar, Aniket
AU - Yuan, Zhuowen
AU - Li, Bo
AU - Mehta, Ruta
AU - Procaccia, Ariel D.
N1 - This work is partially supported by the National Science Foundation under grants No. 2334461, No. 1910100, No. 2046726, and No. 2229876, DARPA GARD, the National Aeronautics and Space Administration (NASA) under grant No. 80NSSC20M0229, the Alfred P. Sloan Fellowship, and an Amazon Research Award.
PY - 2024
Y1 - 2024
N2 - Previous work on fairness in federated learning introduced the notion of core stability, which provides utility-based fairness guarantees to any subset of participating agents. However, these guarantees require strong assumptions on agent utilities that render them impractical. To address this shortcoming, we measure the quality of output models in terms of their ordinal rank instead of their cardinal utility, and use this insight to adapt the classical notion of proportional veto core (PVC) from social choice theory to the federated learning setting. We prove that models that are PVC-stable exist in very general learning paradigms, even allowing non-convex model sets, as well as non-convex and non-concave loss functions. We also design Rank-Core-Fed, a distributed federated learning algorithm, to train a PVC-stable model. Finally, we demonstrate that Rank-Core-Fed outperforms baselines in terms of fairness on different datasets.
AB - Previous work on fairness in federated learning introduced the notion of core stability, which provides utility-based fairness guarantees to any subset of participating agents. However, these guarantees require strong assumptions on agent utilities that render them impractical. To address this shortcoming, we measure the quality of output models in terms of their ordinal rank instead of their cardinal utility, and use this insight to adapt the classical notion of proportional veto core (PVC) from social choice theory to the federated learning setting. We prove that models that are PVC-stable exist in very general learning paradigms, even allowing non-convex model sets, as well as non-convex and non-concave loss functions. We also design Rank-Core-Fed, a distributed federated learning algorithm, to train a PVC-stable model. Finally, we demonstrate that Rank-Core-Fed outperforms baselines in terms of fairness on different datasets.
UR - http://www.scopus.com/inward/record.url?scp=85203846590&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85203846590&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85203846590
SN - 2640-3498
VL - 235
SP - 42245
EP - 42257
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 41st International Conference on Machine Learning, ICML 2024
Y2 - 21 July 2024 through 27 July 2024
ER -