Abstract
Human-AI collaboration is an increasingly commonplace part of decision-making in real-world applications. However, how humans behave when collaborating with AI is not well understood. We develop metacognitive bandits, a computational model of a human's advice-seeking behavior when working with an AI. The model describes a person's metacognitive process of deciding when to rely on their own judgment and when to solicit the advice of the AI. It also accounts for the difficulty of each trial in the decision to solicit advice. We illustrate that the metacognitive bandit makes decisions similar to those of humans in a behavioral experiment. We also demonstrate that algorithm aversion, a widely reported bias, can be explained as the result of a quasi-optimal sequential decision-making process. Our model does not need to assume any prior bias toward AI to produce this behavior.
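The core idea of treating advice-seeking as a bandit problem can be sketched as follows. This is an illustrative sketch only, not the paper's actual model: it assumes a two-armed Beta-Bernoulli bandit with Thompson sampling, where the arms are "rely on self" and "ask the AI", and it omits the paper's treatment of per-trial difficulty. The class name and arm labels are hypothetical.

```python
import random


class MetacognitiveBandit:
    """Illustrative two-armed bandit: on each trial, decide whether to
    rely on one's own judgment or to solicit AI advice.

    Uses Thompson sampling over Beta-Bernoulli posteriors. Note the
    Beta(1, 1) priors: neither arm starts out favored, so any eventual
    preference (or aversion) emerges purely from observed outcomes.
    """

    def __init__(self):
        # [successes + 1, failures + 1] for each arm; Beta(1, 1) = uniform prior.
        self.params = {"self": [1, 1], "ai": [1, 1]}

    def choose(self):
        # Sample a plausible success rate for each arm from its posterior,
        # then act on whichever arm drew the higher sample.
        draws = {arm: random.betavariate(a, b)
                 for arm, (a, b) in self.params.items()}
        return max(draws, key=draws.get)

    def update(self, arm, correct):
        # Bayesian update: increment the success or failure count.
        self.params[arm][0 if correct else 1] += 1
```

In a simulated run where the AI arm pays off more often, the sampler shifts its pulls toward the AI; an early unlucky streak on the AI arm can likewise suppress pulls of that arm for many trials, which is the sense in which algorithm aversion can fall out of a quasi-optimal sequential process rather than a built-in bias.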
Original language | English (US) |
---|---|
Pages | 2780-2786 |
Number of pages | 7 |
State | Published - 2021 |
Event | 43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021 - Virtual, Online, Austria |
Duration | Jul 26 2021 → Jul 29 2021 |
Conference
Conference | 43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021 |
---|---|
Country/Territory | Austria |
City | Virtual, Online |
Period | 7/26/21 → 7/29/21 |
Keywords
- Algorithm aversion
- Bandit problems
- Bayesian modeling
- Cognitive modeling
- Metacognition
- Human-AI interaction
ASJC Scopus subject areas
- Cognitive Neuroscience
- Artificial Intelligence
- Computer Science Applications
- Human-Computer Interaction