In this paper, we present results on the convergence and asymptotic agreement of a class of asynchronous stochastic distributed algorithms that are, in general, time-varying, memory-dependent, and not necessarily associated with the optimization of a common cost functional. We show that convergence and agreement can be reached by distributed learning and computation under a number of conditions; when these hold, the fast and slow parts of the algorithm can be separated, which in turn allows the estimation part to be decoupled from the main algorithm.
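As a rough illustration of the kind of process the abstract describes (not the paper's exact model or assumptions), the following sketch simulates asynchronous agreement: each agent repeatedly replaces its estimate with a convex combination of its own value and possibly outdated values received from the others, with bounded random delays. All names and parameters here are illustrative.

```python
import random

def asynchronous_agreement(n=5, steps=2000, max_delay=3, seed=0):
    """Hypothetical sketch: n agents iterate toward agreement asynchronously."""
    rng = random.Random(seed)
    # Each agent starts from a distinct initial estimate; we keep full
    # histories so that delayed (outdated) values can be read off.
    history = [[float(i)] for i in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)  # only one agent updates per step (asynchrony)
        # Gather possibly outdated estimates of all agents, with a
        # bounded random communication delay for each.
        outdated = []
        for j in range(n):
            d = rng.randint(0, max_delay)
            outdated.append(history[j][max(0, len(history[j]) - 1 - d)])
        # Convex combination with a strictly positive self-weight.
        new = 0.5 * history[i][-1] + 0.5 * sum(outdated) / n
        for j in range(n):
            history[j].append(new if j == i else history[j][-1])
    return [h[-1] for h in history]

final = asynchronous_agreement()
print(max(final) - min(final))  # the spread shrinks as agreement is reached
```

Under these (assumed) conditions — bounded delays and a positive self-weight in each convex combination — the agents' estimates contract toward a common value, which is the flavor of asymptotic agreement the paper analyzes rigorously.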