Abstract
We propose a framework for multiagent systems in which the agents compute their control actions in real time, based on local information only. The novelty of the proposed framework is that the process of computing a suboptimal control action is divided into two phases: an offline phase and an online phase. In the offline phase, an approximate problem is formulated with a cost function that is close to the optimal cost in some sense and is distributed, that is, the costs of non-neighboring nodes are not coupled. This phase is centralized and is completed before the deployment of the system. In the online phase, the approximate problem is solved in real time by implementing any efficient distributed optimization algorithm. To quantify the performance loss, we derive upper bounds for the maximum error between the optimal performance and the performance under the proposed framework. Finally, the proposed framework is applied to an example setup in which a team of mobile nodes is assigned the task of establishing a communication link between two base stations with minimum energy consumption. We show through simulations that the performance under the proposed framework is close to the optimal performance, and the suboptimal policy can be efficiently implemented online.
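To make the offline/online split concrete, the sketch below mimics the pipeline on a small networked LQR problem: offline, a centralized Riccati solution is replaced by a graph-sparse approximation of the cost-to-go matrix; online, each agent refines its input with gradient iterations that need only information from nearby agents. All problem data, the truncation-based approximation, and the gradient iteration are illustrative assumptions, not the construction or algorithm proposed in the paper.

```python
# Hypothetical sketch of an offline/online split for a networked LQR problem.
# This is NOT the paper's algorithm: the truncation-based approximation and
# the gradient iteration are assumptions used only to illustrate the idea.
import numpy as np
from scipy.linalg import solve_discrete_are

# --- Hypothetical problem data: 4 agents coupled over a line graph ----------
n = 4
Adj = np.eye(n, k=1) + np.eye(n, k=-1)        # line-graph adjacency
Lap = np.diag(Adj.sum(axis=1)) - Adj          # graph Laplacian
A = 0.95 * np.eye(n) - 0.3 * Lap              # consensus-like stable dynamics
B = np.eye(n)
Q = np.eye(n)
R = 0.1 * np.eye(n)
mask = (Adj + np.eye(n)) > 0                  # keep self/neighbor couplings only

# --- Offline (centralized) phase ---------------------------------------------
# Solve the centralized Riccati equation, then approximate the cost-to-go
# matrix by a graph-sparse one: couplings between non-neighbors are dropped.
P = solve_discrete_are(A, B, Q, R)
P_hat = P * mask

# --- Online (distributed) phase ----------------------------------------------
# Gradient descent on the approximate cost u'Ru + (Ax+Bu)' P_hat (Ax+Bu).
# Because A and P_hat are both graph-sparse, each gradient entry involves only
# agents within two hops, so the iteration can run via local exchanges.
def online_control(x, steps=50, step_size=0.2):
    u = np.zeros(n)
    for _ in range(steps):
        grad = 2 * R @ u + 2 * B.T @ P_hat @ (A @ x + B @ u)
        u -= step_size * grad
    return u

x0 = np.array([1.0, -0.5, 0.3, 2.0])
print("suboptimal input:", online_control(x0))
```

In the paper, the offline phase is designed so that the approximate distributed cost is provably close to the optimal one; the simple truncation above is only a stand-in for that step.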
| Original language | English (US) |
| --- | --- |
| Article number | 8046098 |
| Pages (from-to) | 1717-1728 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Control of Network Systems |
| Volume | 5 |
| Issue number | 4 |
| DOIs | |
| State | Published - Dec 2018 |
| Externally published | Yes |
Keywords
- Distributed optimization
- approximate dynamic programming
- linear quadratic regulator (LQR)
- real-time systems
ASJC Scopus subject areas
- Control and Systems Engineering
- Signal Processing
- Computer Networks and Communications
- Control and Optimization