TY - JOUR
T1 - Decentralized nonconvex optimization with guaranteed privacy and accuracy
AU - Wang, Yongqiang
AU - Başar, Tamer
N1 - The work of Yongqiang Wang was supported in part by the National Science Foundation under Grants ECCS-1912702, CCF-2106293, CCF-2215088, and CNS-2219487. The work of Tamer Başar was supported in part by the ONR MURI Grant N00014-16-1-2710 and in part by the Army Research Laboratory, United States, under Cooperative Agreement W911NF-17-2-0196. The material in this paper was not presented at any conference. This paper was recommended for publication in revised form by Associate Editor Sergio Grammatico under the direction of Editor Ian R. Petersen.
PY - 2023/4
Y1 - 2023/4
N2 - Privacy protection and nonconvexity are two challenging problems in decentralized optimization and learning involving sensitive data. Despite recent advances addressing each problem separately, no results have been reported with theoretical guarantees on both privacy protection and saddle/maximum avoidance in decentralized nonconvex optimization. We propose a new algorithm for decentralized nonconvex optimization that enables both rigorous differential privacy and saddle/maximum-avoiding performance. The algorithm incorporates persistent additive noise to provide rigorous differential privacy for data samples, gradients, and intermediate optimization variables without losing provable convergence, thereby circumventing the dilemma of trading accuracy for privacy in differential-privacy design. More interestingly, the algorithm is theoretically proven to guarantee accuracy by avoiding convergence to local maxima and saddle points, which has not been reported before in the literature on decentralized nonconvex optimization. The algorithm is efficient in both communication (it shares only one variable per iteration) and computation (it is encryption-free), and is hence promising for large-scale nonconvex optimization and learning involving high-dimensional optimization parameters. Numerical experiments on both a decentralized estimation problem and an Independent Component Analysis (ICA) problem confirm the effectiveness of the proposed approach.
KW - Distributed optimization
KW - Nonconvex optimization
KW - Privacy
KW - Saddle avoidance
UR - http://www.scopus.com/inward/record.url?scp=85146921923&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85146921923&partnerID=8YFLogxK
U2 - 10.1016/j.automatica.2023.110858
DO - 10.1016/j.automatica.2023.110858
M3 - Article
AN - SCOPUS:85146921923
SN - 0005-1098
VL - 150
JO - Automatica
JF - Automatica
M1 - 110858
ER -