Explaining Algorithm Aversion with Metacognitive Bandits

Aakriti Kumar, Trisha Patel, Aaron S. Benjamin, Mark Steyvers

Research output: Contribution to conference › Paper › peer-review

Abstract

Human-AI collaboration is an increasingly commonplace part of decision-making in real-world applications, yet how humans behave when collaborating with AI is not well understood. We develop metacognitive bandits, a computational model of a human's advice-seeking behavior when working with an AI. The model describes a person's metacognitive process of deciding when to rely on their own judgment and when to solicit the advice of the AI. It also accounts for the difficulty of each trial in the decision to solicit advice. We show that the metacognitive bandit makes decisions similar to those of humans in a behavioral experiment. We also demonstrate that algorithm aversion, a widely reported bias, can be explained as the result of a quasi-optimal sequential decision-making process. Our model does not need to assume any prior bias toward AI to produce this behavior.

Original language: English (US)
Pages: 2780-2786
Number of pages: 7
State: Published - 2021
Event: 43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021 - Virtual, Online, Austria
Duration: Jul 26 2021 - Jul 29 2021

Conference

Conference: 43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021
Country/Territory: Austria
City: Virtual, Online
Period: 7/26/21 - 7/29/21

Keywords

  • Algorithm aversion
  • Bandit problems
  • Bayesian modeling
  • Metacognition
  • Cognitive modeling
  • Human-AI interaction

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Artificial Intelligence
  • Computer Science Applications
  • Human-Computer Interaction

