TY - GEN
T1 - Distributed discrete-time optimization by exchanging one bit of information
AU - Zhang, Jiaqi
AU - You, Keyou
AU - Başar, Tamer
N1 - Publisher Copyright:
© 2018 AACC.
PY - 2018/8/9
Y1 - 2018/8/9
N2 - This paper proposes a distributed discrete-time algorithm to solve an additive cost optimization problem over undirected deterministic or time-varying graphs. Unlike most previous methods, which require nodes to exchange their exact states, each node in our algorithm needs only the sign of the relative state between itself and each of its neighbors, which is clearly one bit of information. Our analysis is based on optimization theory rather than Lyapunov theory or algebraic graph theory; the latter two are commonly used in the existing literature, especially in continuous-time algorithm design, and are difficult to apply in our case. Besides, an optimization-theory-based analysis may make our results more extensible. In particular, our convergence proofs build on the convergence of the subgradient method and the stochastic subgradient method. Moreover, the convergence rate of our algorithm varies from O(1/ln(k)) to O(1/√k), depending on the choice of stepsize. A quantile regression problem is included to illustrate the performance of our algorithm through simulations.
AB - This paper proposes a distributed discrete-time algorithm to solve an additive cost optimization problem over undirected deterministic or time-varying graphs. Unlike most previous methods, which require nodes to exchange their exact states, each node in our algorithm needs only the sign of the relative state between itself and each of its neighbors, which is clearly one bit of information. Our analysis is based on optimization theory rather than Lyapunov theory or algebraic graph theory; the latter two are commonly used in the existing literature, especially in continuous-time algorithm design, and are difficult to apply in our case. Besides, an optimization-theory-based analysis may make our results more extensible. In particular, our convergence proofs build on the convergence of the subgradient method and the stochastic subgradient method. Moreover, the convergence rate of our algorithm varies from O(1/ln(k)) to O(1/√k), depending on the choice of stepsize. A quantile regression problem is included to illustrate the performance of our algorithm through simulations.
UR - http://www.scopus.com/inward/record.url?scp=85052585532&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85052585532&partnerID=8YFLogxK
U2 - 10.23919/ACC.2018.8431279
DO - 10.23919/ACC.2018.8431279
M3 - Conference contribution
AN - SCOPUS:85052585532
SN - 9781538654286
T3 - Proceedings of the American Control Conference
SP - 2065
EP - 2070
BT - 2018 Annual American Control Conference, ACC 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 Annual American Control Conference, ACC 2018
Y2 - 27 June 2018 through 29 June 2018
ER -