Abstract
Due to the iterative information exchange between agents, decentralized multi-agent optimization algorithms often incur a large communication overhead, which is not affordable in many practical systems with scarce resources. To address this communication bottleneck, we compress the information exchanged between neighboring agents in order to solve the decentralized online convex optimization problem, in which each agent has a time-varying local loss function and the goal is to minimize the accumulated global loss by choosing actions sequentially. We develop a decentralized online gradient descent algorithm based on compressed communications, where the compression operators significantly reduce the amount of data to be sent. An error-compensation technique is utilized to mitigate the error accumulation caused by the communication compression. We further analyze the performance of the proposed algorithm and show that its regret is bounded from above by O(√T), where T is the time horizon; this matches, in order sense, the regret bound of vanilla decentralized online gradient descent with perfect communication. Thus, the proposed algorithm is capable of reducing the communication overhead remarkably without much degradation of the optimization performance. This is further corroborated by numerical experiments, in which compressed communication reduces the number of transmitted bits by an order of magnitude without compromising the optimization performance.
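As a rough illustration of the scheme the abstract describes, the following Python sketch combines top-k sparsification (one illustrative choice of compression operator; the paper treats a general class) with error compensation inside a gossip-style online gradient descent loop. The function names, the mixing-matrix setup, and the specific update order here are assumptions made for illustration, not the paper's exact algorithm.

```python
import numpy as np

def top_k(v, k):
    """Top-k sparsification: keep only the k largest-magnitude entries
    (an illustrative compression operator)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def compressed_online_gd(grads, W, d, eta0=0.5, k=2):
    """Sketch: decentralized online gradient descent with compressed
    communications and error compensation.

    grads -- grads[t][i] is a callable returning agent i's gradient of
             its time-t local loss at a given point
    W     -- doubly stochastic mixing (gossip) matrix of the network
    d     -- dimension of the decision variable
    """
    T, n = len(grads), W.shape[0]
    x = np.zeros((n, d))              # each agent's current action
    e = np.zeros((n, d))              # accumulated compression error per agent
    for t in range(T):
        # Compress the error-compensated state before transmission.
        q = np.array([top_k(x[i] + e[i], k) for i in range(n)])
        e = x + e - q                 # error feedback: carry the residual forward
        # Neighbors average the compressed messages (consensus step),
        # then each agent takes a local gradient step.
        eta = eta0 / np.sqrt(t + 1)   # diminishing step size ~ 1/sqrt(t)
        g = np.array([grads[t][i](x[i]) for i in range(n)])
        x = W @ q - eta * g
    return x

# Toy usage: n = 3 agents, quadratic losses f_{t,i}(x) = ||x - a_{t,i}||^2.
rng = np.random.default_rng(0)
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
targets = rng.normal(size=(100, 3, 4))   # a_{t,i} for T = 100, d = 4
grads = [[(lambda x, a=targets[t, i]: 2 * (x - a)) for i in range(3)]
         for t in range(100)]
x_final = compressed_online_gd(grads, W, d=4)
```

The key design point the sketch tries to capture is the error term e: it stores whatever the compressor discarded and re-injects it into the next transmission, which is what keeps the compression errors from accumulating across rounds.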
| Original language | English (US) |
| --- | --- |
| Article number | 111186 |
| Journal | Automatica |
| Volume | 156 |
| State | Published - Oct 2023 |
| Externally published | Yes |
Keywords
- Communication efficiency
- Compressed communications
- Decentralized optimization
- Error compensation
- Online convex optimization
ASJC Scopus subject areas
- Control and Systems Engineering
- Electrical and Electronic Engineering