TY - GEN
T1 - Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM
AU - Xie, Chulin
AU - Chen, Pin-Yu
AU - Li, Qinbin
AU - Nourian, Arash
AU - Zhang, Ce
AU - Li, Bo
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Federated learning (FL) enables distributed resource-constrained devices to jointly train shared models while keeping the training data local for privacy purposes. Vertical FL (VFL), which allows each client to collect partial features, has attracted intensive research efforts recently. We identify the main challenge that existing VFL frameworks face: the server needs to communicate gradients with the clients at each training step, incurring high communication costs that lead to rapid consumption of privacy budgets. To address this challenge, in this paper, we introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account and enables an efficient decomposition of the VFL optimization objective into sub-objectives that can be iteratively tackled by the server and the clients on their own. In particular, we propose an Alternating Direction Method of Multipliers (ADMM)-based method to solve our optimization problem, which allows clients to conduct multiple local updates before communication, thus reducing the communication cost and leading to better performance under differential privacy (DP). We provide a client-level DP mechanism for our framework to protect user privacy. Moreover, we show that a byproduct of VIM is that the weights of the learned heads reflect the importance of local clients. We conduct extensive evaluations and show that on four vertical FL datasets, VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art. We also explicitly evaluate the importance of local clients and show that VIM enables functionalities such as client-level explanation and client denoising. We hope this work will shed light on a new, effective way of VFL training and understanding.
AB - Federated learning (FL) enables distributed resource-constrained devices to jointly train shared models while keeping the training data local for privacy purposes. Vertical FL (VFL), which allows each client to collect partial features, has attracted intensive research efforts recently. We identify the main challenge that existing VFL frameworks face: the server needs to communicate gradients with the clients at each training step, incurring high communication costs that lead to rapid consumption of privacy budgets. To address this challenge, in this paper, we introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account and enables an efficient decomposition of the VFL optimization objective into sub-objectives that can be iteratively tackled by the server and the clients on their own. In particular, we propose an Alternating Direction Method of Multipliers (ADMM)-based method to solve our optimization problem, which allows clients to conduct multiple local updates before communication, thus reducing the communication cost and leading to better performance under differential privacy (DP). We provide a client-level DP mechanism for our framework to protect user privacy. Moreover, we show that a byproduct of VIM is that the weights of the learned heads reflect the importance of local clients. We conduct extensive evaluations and show that on four vertical FL datasets, VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art. We also explicitly evaluate the importance of local clients and show that VIM enables functionalities such as client-level explanation and client denoising. We hope this work will shed light on a new, effective way of VFL training and understanding.
KW - ADMM
KW - Communication-Efficiency
KW - Differential Privacy
KW - Vertical Federated Learning
UR - http://www.scopus.com/inward/record.url?scp=85193779533&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85193779533&partnerID=8YFLogxK
U2 - 10.1109/SaTML59370.2024.00029
DO - 10.1109/SaTML59370.2024.00029
M3 - Conference contribution
AN - SCOPUS:85193779533
T3 - Proceedings - IEEE Conference on Safe and Trustworthy Machine Learning, SaTML 2024
SP - 443
EP - 471
BT - Proceedings - IEEE Conference on Safe and Trustworthy Machine Learning, SaTML 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE Conference on Safe and Trustworthy Machine Learning, SaTML 2024
Y2 - 9 April 2024 through 11 April 2024
ER -