TY - JOUR
T1 - BOBA: Byzantine-Robust Federated Learning with Label Skewness
T2 - 27th International Conference on Artificial Intelligence and Statistics, AISTATS 2024
AU - Bao, Wenxuan
AU - Wu, Jun
AU - He, Jingrui
N1 - This work is supported by the National Science Foundation under Award Nos. IIS-1947203 and IIS-2117902, and by the U.S. Department of Homeland Security under Grant Award Number 17STQAC00001-06-00. The views and conclusions are those of the authors and should not be interpreted as representing the official policies of the funding agencies or the government.
PY - 2024
Y1 - 2024
AB - In federated learning, most existing robust aggregation rules (AGRs) combat Byzantine attacks in the IID setting, where client data is assumed to be independent and identically distributed. In this paper, we address label skewness, a more realistic and challenging non-IID setting, where each client only has access to a few classes of data. In this setting, state-of-the-art AGRs suffer from selection bias, leading to a significant performance drop for particular classes; they are also more vulnerable to Byzantine attacks due to the increased variation among gradients of honest clients. To address these limitations, we propose an efficient two-stage method named BOBA. Theoretically, we prove the convergence of BOBA with an error of the optimal order. Our empirical evaluations demonstrate BOBA’s superior unbiasedness and robustness across diverse models and datasets when compared to various baselines. Our code is available at https://github.com/baowenxuan/BOBA.
UR - http://www.scopus.com/inward/record.url?scp=85194164677&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85194164677&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85194164677
SN - 2640-3498
VL - 238
SP - 892
EP - 900
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
Y2 - 2 May 2024 through 4 May 2024
ER -