Abstract
This work explores the distributed state estimation problem for an uncertain, nonlinear, and continuous-time system. Given a sensor network, each agent is assigned a deep neural network (DNN) that is used to approximate the system's dynamics. Each agent updates the weights of its DNN through a multiple-timescale approach, i.e., the outer-layer weights are updated online with a Lyapunov-based gradient descent update law, and the inner-layer weights are updated concurrently using a supervised learning strategy. To promote the efficient use of network resources, the distributed observer uses event-triggered communication. A nonsmooth Lyapunov analysis demonstrates that the distributed event-triggered observer achieves uniformly ultimately bounded state reconstruction. A simulation example of a five-agent sensor network estimating the state of a two-link robotic manipulator tracking a desired trajectory is provided to validate the result and showcase the performance improvements afforded by DNNs.
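To make the architecture described above concrete, the following is a minimal conceptual sketch of a single agent's observer step: a two-layer DNN approximates the unknown dynamics, the outer-layer weights are adapted online by a gradient-descent-style law, and the agent only rebroadcasts its estimate when an event-trigger threshold is exceeded. All names, gains, and update laws here (`Agent`, `gamma`, `k_obs`, `trigger_eps`, the specific adaptation and trigger rules) are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# Hypothetical dimensions: state x in R^n, hidden-layer width m.
n, m = 4, 10
rng = np.random.default_rng(0)

def dnn(x, V, W):
    """Two-layer approximation of the unknown dynamics f(x): inner-layer
    weights V (trained with a supervised strategy, held fixed here) and
    outer-layer weights W (adapted online)."""
    phi = np.tanh(V @ x)          # hidden-layer features
    return W @ phi, phi

class Agent:
    """One sensor-network agent running a local DNN-based observer with
    event-triggered broadcasts (illustrative sketch, not the paper's laws)."""
    def __init__(self, gamma=5.0, k_obs=2.0, trigger_eps=0.05):
        self.x_hat = np.zeros(n)              # local state estimate
        self.V = rng.standard_normal((m, n))  # inner-layer weights
        self.W = rng.standard_normal((n, m))  # outer-layer weights
        self.gamma = gamma                    # adaptation gain (assumed)
        self.k_obs = k_obs                    # observer gain (assumed)
        self.eps = trigger_eps                # event-trigger threshold (assumed)
        self.last_broadcast = self.x_hat.copy()

    def step(self, y, C, neighbor_broadcasts, dt=1e-3):
        # Output estimation error drives both the observer and the adaptation.
        y_tilde = y - C @ self.x_hat
        f_hat, phi = dnn(self.x_hat, self.V, self.W)

        # Consensus term built from the neighbors' last *broadcast* estimates,
        # which is where event-triggered communication enters.
        consensus = sum(xb - self.x_hat for xb in neighbor_broadcasts)

        # Observer update: DNN dynamics estimate + output injection + consensus.
        self.x_hat += dt * (f_hat + self.k_obs * (C.T @ y_tilde) + consensus)

        # Outer-layer weights: gradient-descent-style online adaptation.
        self.W += dt * self.gamma * np.outer(C.T @ y_tilde, phi)

        # Event trigger: rebroadcast only when the local estimate has drifted
        # far enough from the last transmitted value.
        if np.linalg.norm(self.x_hat - self.last_broadcast) > self.eps:
            self.last_broadcast = self.x_hat.copy()
        return self.last_broadcast
```

The intent of the sketch is only to show how the pieces fit together: neighbors exchange the `last_broadcast` values rather than their continuously evolving estimates, so communication occurs only at triggering instants.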
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 3107-3114 |
| Number of pages | 8 |
| Journal | IEEE Transactions on Automatic Control |
| Volume | 68 |
| Issue number | 5 |
| DOIs | |
| State | Published - May 1 2023 |
Keywords
- Lyapunov methods
- Multi-agent systems
- deep learning
- nonlinear control systems
- state estimation
- wireless sensor networks
ASJC Scopus subject areas
- Control and Systems Engineering
- Computer Science Applications
- Electrical and Electronic Engineering