TY - JOUR

T1 - Decentralized multi-task stochastic optimization with compressed communications

AU - Singh, Navjot

AU - Cao, Xuanyu

AU - Diggavi, Suhas

AU - Başar, Tamer

N1 - Publisher Copyright:
© 2023 Elsevier Ltd

PY - 2024/1

Y1 - 2024/1

N2 - We consider a multi-agent network where each node has a stochastic (local) cost function that depends on the decision variable of that node and a random variable, and further, the decision variables of neighboring nodes are pairwise constrained. There is an aggregate objective function for the network, composed additively of the expected values of the local cost functions at the nodes, and the overall goal of the network is to obtain the minimizing solution to this aggregate objective function subject to all the pairwise constraints. This is to be achieved at the level of the nodes using decentralized information and local computation, with exchanges of only compressed information allowed between neighboring nodes. The paper develops algorithms and obtains performance bounds for two different models of local information availability at the nodes: (i) sample feedback, where each node has direct access to samples of the local random variable to evaluate its local cost, and (ii) bandit feedback, where samples of the random variables are not available, but only the values of the local cost functions at two random points close to the decision are available to each node. For both models, with compressed communication between neighbors, we develop decentralized saddle-point algorithms that deliver performance no different (in order sense) from that without communication compression; specifically, we show that the deviation from the global minimum value and the violations of the constraints are upper-bounded by [Formula presented] and [Formula presented], respectively, where T is the number of iterations. Numerical examples provided in the paper corroborate these bounds.

AB - We consider a multi-agent network where each node has a stochastic (local) cost function that depends on the decision variable of that node and a random variable, and further, the decision variables of neighboring nodes are pairwise constrained. There is an aggregate objective function for the network, composed additively of the expected values of the local cost functions at the nodes, and the overall goal of the network is to obtain the minimizing solution to this aggregate objective function subject to all the pairwise constraints. This is to be achieved at the level of the nodes using decentralized information and local computation, with exchanges of only compressed information allowed between neighboring nodes. The paper develops algorithms and obtains performance bounds for two different models of local information availability at the nodes: (i) sample feedback, where each node has direct access to samples of the local random variable to evaluate its local cost, and (ii) bandit feedback, where samples of the random variables are not available, but only the values of the local cost functions at two random points close to the decision are available to each node. For both models, with compressed communication between neighbors, we develop decentralized saddle-point algorithms that deliver performance no different (in order sense) from that without communication compression; specifically, we show that the deviation from the global minimum value and the violations of the constraints are upper-bounded by [Formula presented] and [Formula presented], respectively, where T is the number of iterations. Numerical examples provided in the paper corroborate these bounds.

KW - Bandit feedback

KW - Compression

KW - Decentralized stochastic optimization

KW - Saddle-point algorithm

KW - Sample feedback

UR - http://www.scopus.com/inward/record.url?scp=85175183392&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85175183392&partnerID=8YFLogxK

U2 - 10.1016/j.automatica.2023.111363

DO - 10.1016/j.automatica.2023.111363

M3 - Article

AN - SCOPUS:85175183392

SN - 0005-1098

VL - 159

JO - Automatica

JF - Automatica

M1 - 111363

ER -