Robust Multi-Agent Bandits Over Undirected Graphs

Daniel Vial, Sanjay Shakkottai, R. Srikant

Research output: Contribution to journal › Article › peer-review

Abstract

We consider a multi-agent multi-armed bandit setting in which n honest agents collaborate over a network to minimize regret, but m malicious agents can disrupt learning arbitrarily. Assuming the network is the complete graph, existing algorithms incur O((m + K/n) log(T)/Δ) regret in this setting, where K is the number of arms and Δ is the arm gap. For m ≪ K, this improves over the single-agent baseline regret of O(K log(T)/Δ). In this work, we show the situation is murkier beyond the case of a complete graph. In particular, we prove that if the state-of-the-art algorithm is used on the undirected line graph, honest agents can suffer (nearly) linear regret until time is doubly exponential in K and n. In light of this negative result, we propose a new algorithm for which the i-th agent has regret O((d_mal(i) + K/n) log(T)/Δ) on any connected and undirected graph, where d_mal(i) is the number of i's neighbors who are malicious. Thus, we generalize existing regret bounds beyond the complete graph (where d_mal(i) = m), and show the effect of malicious agents is entirely local (in the sense that only the d_mal(i) malicious agents directly connected to i affect its long-term regret).
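For reference, the regret bounds stated in the abstract can be collected in display form (T denotes the time horizon; all other notation follows the abstract):

\[
\begin{aligned}
\text{single-agent baseline:}\quad & O\!\left(\frac{K \log T}{\Delta}\right) \\
\text{complete graph (prior work):}\quad & O\!\left(\frac{(m + K/n)\,\log T}{\Delta}\right) \\
\text{any connected graph (this work, agent } i\text{):}\quad & O\!\left(\frac{\bigl(d_{\mathrm{mal}}(i) + K/n\bigr)\,\log T}{\Delta}\right)
\end{aligned}
\]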

Original language: English (US)
Article number: 53
Journal: Proceedings of the ACM on Measurement and Analysis of Computing Systems
Volume: 6
Issue number: 3
DOIs
State: Published - Dec 8 2022

Keywords

  • malicious agents
  • multi-armed bandits

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Safety, Risk, Reliability and Quality
  • Hardware and Architecture
  • Computer Networks and Communications
