Decentralized Federated Learning for Over-Parameterized Models

Tiancheng Qin, S. Rasoul Etesami, Cesar A. Uribe

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

Abstract

Modern machine learning, especially deep learning, features models that are often highly expressive and over-parameterized. Such models can interpolate the data, driving the empirical loss close to zero. We analyze the convergence rate of decentralized stochastic gradient descent (SGD), which lies at the core of decentralized federated learning (DFL), for these over-parameterized models. Our analysis covers decentralized SGD with time-varying networks, local updates, and heterogeneous data. We establish strong convergence guarantees, with or without the assumption of convex objectives, that either improve upon the existing literature or are the first for this regime.
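The abstract refers to decentralized SGD with local updates over a time-varying network and heterogeneous local data. The following is a minimal, generic sketch of that setup on a toy least-squares problem; all names, the ring-based mixing schedule, and the hyperparameters are illustrative assumptions and not the paper's exact method.

```python
# Generic sketch of decentralized local SGD over a time-varying network.
# Each node runs H local SGD steps on its own (heterogeneous) data, then
# averages its model with neighbors via a doubly stochastic matrix W_t
# that changes across communication rounds. Hypothetical toy setup.

import numpy as np

np.random.seed(0)
n_nodes, dim, H, rounds, lr = 4, 10, 5, 50, 0.05

# Heterogeneous local data: each node has its own linear-regression problem.
data = []
for i in range(n_nodes):
    A = np.random.randn(100, dim)
    w_star = np.random.randn(dim) + i  # node-dependent shift -> heterogeneity
    data.append((A, A @ w_star))

x = np.zeros((n_nodes, dim))  # one model copy per node


def mixing_matrix(t):
    """Doubly stochastic matrix on a ring whose edge set alternates with t
    (a simple stand-in for a time-varying communication graph)."""
    W = np.eye(n_nodes)
    offset = 1 if t % 2 == 0 else 2
    for i in range(n_nodes):
        j = (i + offset) % n_nodes
        W[i, i] -= 0.5
        W[i, j] += 0.5
    return 0.5 * (W + W.T)  # symmetrize; stays doubly stochastic


for t in range(rounds):
    # Local phase: H mini-batch SGD steps per node on its own data.
    for i in range(n_nodes):
        A, b = data[i]
        for _ in range(H):
            idx = np.random.randint(len(b), size=10)
            grad = A[idx].T @ (A[idx] @ x[i] - b[idx]) / len(idx)
            x[i] -= lr * grad
    # Communication phase: gossip averaging with the current mixing matrix.
    x = mixing_matrix(t) @ x

avg_loss = np.mean([np.mean((A @ xi - b) ** 2) for (A, b), xi in zip(data, x)])
print(f"average local loss after {rounds} rounds: {avg_loss:.4f}")
```

In the over-parameterized (interpolation) regime studied in the paper, each local loss can be driven near zero, which is what the convergence guarantees exploit; the sketch above only illustrates the communication and local-update pattern.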

Original language: English (US)
Title of host publication: 2022 IEEE 61st Conference on Decision and Control, CDC 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 5200-5205
Number of pages: 6
ISBN (Electronic): 9781665467612
DOIs
State: Published - 2022
Externally published: Yes
Event: 61st IEEE Conference on Decision and Control, CDC 2022 - Cancun, Mexico
Duration: Dec 6, 2022 - Dec 9, 2022

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
Volume: 2022-December
ISSN (Print): 0743-1546
ISSN (Electronic): 2576-2370

Conference

Conference: 61st IEEE Conference on Decision and Control, CDC 2022
Country/Territory: Mexico
City: Cancun
Period: 12/6/22 - 12/9/22

Keywords

  • Decentralized Federated Learning
  • Decentralized Optimization
  • Local SGD
  • Overparameterization

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization
