TY - JOUR
T1 - Decentralized online convex optimization with compressed communications
AU - Cao, Xuanyu
AU - Başar, Tamer
N1 - Research supported by the National Natural Science Foundation of China under Grant 62203373, and the US Army Research Laboratory under Cooperative Agreement Number W911NF-17-2-0196. The material in this paper was not presented at any conference. This paper was recommended for publication in revised form by Associate Editor Sergio Grammatico under the direction of Editor Ian R. Petersen.
PY - 2023/10
Y1 - 2023/10
N2 - Due to the iterative information exchange between agents, decentralized multi-agent optimization algorithms often incur large communication overhead, which is not affordable in many practical systems with scarce resources. To address this communication bottleneck, we compress the information exchanged between neighboring agents to solve a decentralized online convex optimization problem, where each agent has a time-varying local loss function and the goal is to minimize the accumulated global loss by choosing actions sequentially. We develop a decentralized online gradient descent algorithm based on compressed communications, where the compression operators can significantly reduce the amount of data to be sent. An error-compensation technique is utilized to mitigate the error accumulation caused by the communication compression. We further analyze the performance of the proposed algorithm. It is shown that the regret is bounded from above by O(√T), where T is the time horizon, which is the same (in order sense) as the regret bound for vanilla decentralized online gradient descent with perfect communication. Thus, the proposed algorithm is capable of reducing the communication overhead remarkably without much degradation of the optimization performance. This is further corroborated by numerical experiments, where compressed communication reduces the number of transmitted bits by an order of magnitude without compromising the optimization performance.
KW - Communication efficiency
KW - Compressed communications
KW - Decentralized optimization
KW - Error compensation
KW - Online convex optimization
UR - http://www.scopus.com/inward/record.url?scp=85165950362&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85165950362&partnerID=8YFLogxK
U2 - 10.1016/j.automatica.2023.111186
DO - 10.1016/j.automatica.2023.111186
M3 - Article
AN - SCOPUS:85165950362
SN - 0005-1098
VL - 156
JO - Automatica
JF - Automatica
M1 - 111186
ER -