Abstract

Consider a social learning problem in a parallel network, where N distributed agents make independent selfish binary decisions and a central agent aggregates these decisions together with a private signal to make a final decision. In particular, all agents hold private beliefs about the true prior, based on which they perform binary hypothesis testing. We focus on the Bayes risk of the central agent and, counterintuitively, find that a collection of agents with incorrect beliefs can outperform a set of agents with correct beliefs. We also consider many-agent asymptotics (i.e., N is large) when the distributed agents all have identical beliefs: the central agent's decision becomes polarized, and the beliefs determine the limiting value of the central agent's risk. Moreover, surprisingly, when all agents hold a certain prior-agnostic constant belief, the globally optimal risk is achieved as N→∞.
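To make the setup concrete, the following is a minimal Monte Carlo sketch of the parallel network described above: each distributed agent runs a likelihood-ratio test against its believed prior, and the central agent fuses the N binary votes with its own private signal. The Gaussian signal model (means -1/+1, unit variance), the 0/1 cost structure, and the naive-Bayes fusion rule are illustrative assumptions, not the paper's exact model.

```python
# Sketch of a parallel-network social learning simulation under assumed
# Gaussian signals; the paper's precise model and fusion rule may differ.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def agent_decisions(x, believed_prior):
    """Each agent runs a likelihood-ratio test using its *believed* prior P(H0).

    Under H0 the signal is N(-1, 1), under H1 it is N(+1, 1). Decide 1 when the
    log-likelihood ratio exceeds log(believed_prior / (1 - believed_prior)).
    """
    llr = norm.logpdf(x, loc=1.0) - norm.logpdf(x, loc=-1.0)
    return (llr > np.log(believed_prior / (1.0 - believed_prior))).astype(int)

def central_bayes_risk(true_prior, believed_prior, n_agents, n_trials=100_000):
    """Estimate the central agent's Bayes risk (probability of error).

    The central agent knows the true prior and the agents' error probabilities
    implied by their believed prior, observes its own private signal, and fuses
    everything through a posterior ratio over the hypothesis.
    """
    # Agents' per-hypothesis decision probabilities under the assumed model:
    # the LLR test reduces to comparing the signal x against the threshold tau.
    tau = 0.5 * np.log(believed_prior / (1.0 - believed_prior))
    p_decide1_given_h0 = norm.sf(tau + 1.0)   # false-alarm probability
    p_decide1_given_h1 = norm.sf(tau - 1.0)   # detection probability

    # Draw hypotheses, agent signals, and the central agent's private signal.
    h = (rng.random(n_trials) >= true_prior).astype(int)  # H0 w.p. true_prior
    means = 2.0 * h - 1.0
    agent_x = means[:, None] + rng.standard_normal((n_trials, n_agents))
    central_x = means + rng.standard_normal(n_trials)

    votes = agent_decisions(agent_x, believed_prior)
    k = votes.sum(axis=1)  # number of agents voting for H1

    # Central agent's log-posterior ratio: true prior + own signal + agent votes.
    log_ratio = (
        np.log((1.0 - true_prior) / true_prior)
        + norm.logpdf(central_x, loc=1.0) - norm.logpdf(central_x, loc=-1.0)
        + k * np.log(p_decide1_given_h1 / p_decide1_given_h0)
        + (n_agents - k) * np.log((1.0 - p_decide1_given_h1) / (1.0 - p_decide1_given_h0))
    )
    decision = (log_ratio > 0).astype(int)
    return np.mean(decision != h)

# Example: agents with the correct belief vs. a constant belief of 1/2.
for belief in (0.7, 0.5):
    risk = central_bayes_risk(true_prior=0.7, believed_prior=belief, n_agents=15)
    print(f"believed prior {belief:.1f}: estimated Bayes risk {risk:.4f}")
```

Running such a simulation for a range of believed priors and values of N is one way to visualize the risk behaviors the abstract describes, though the asymptotic and optimality claims themselves are established analytically in the paper.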

Original language: English (US)
Title of host publication: 2020 IEEE International Symposium on Information Theory, ISIT 2020 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1265-1270
Number of pages: 6
ISBN (Electronic): 9781728164328
DOIs
State: Published - Jun 2020
Event: 2020 IEEE International Symposium on Information Theory, ISIT 2020 - Los Angeles, United States
Duration: Jul 21, 2020 – Jul 26, 2020

Publication series

Name: IEEE International Symposium on Information Theory - Proceedings
Volume: 2020-June
ISSN (Print): 2157-8095

Conference

Conference: 2020 IEEE International Symposium on Information Theory, ISIT 2020
Country/Territory: United States
City: Los Angeles
Period: 7/21/20 – 7/26/20

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Information Systems
  • Modeling and Simulation
  • Applied Mathematics
