Effect of AI Explanations on Human Perceptions of Patient-Facing AI-Powered Healthcare Systems

Zhan Zhang, Yegin Genc, Dakuo Wang, Mehmet Eren Ahsen, Xiangmin Fan

Research output: Contribution to journal › Article › peer-review

Abstract

Ongoing research efforts have been examining how to utilize artificial intelligence technology to help healthcare consumers make sense of their clinical data, such as diagnostic radiology reports. How to promote the acceptance of such novel technology is an active research topic. Recent studies highlight the importance of providing local explanations about AI predictions and model performance to help users determine whether to trust the AI's predictions. Despite these efforts, limited empirical research has quantitatively measured how AI explanations affect healthcare consumers' perceptions of patient-facing, AI-powered healthcare systems. The aim of this study is to evaluate the effects of different AI explanations on people's perceptions of an AI-powered healthcare system. In this work, we designed and deployed a large-scale experiment (N = 3,423) on Amazon Mechanical Turk (MTurk) to evaluate the effects of AI explanations on people's perceptions in the context of comprehending radiology reports. We created four groups based on two factors—the extent of explanations for the prediction (High vs. Low Transparency) and the model performance (Good vs. Weak AI Model)—and randomly assigned participants to one of the four conditions. Participants were instructed to classify a radiology report as describing a normal or abnormal finding and then completed a post-study survey to indicate their perceptions of the AI tool. We found that revealing model performance information can promote people's trust in and perceived usefulness of system outputs, while providing local explanations for the rationale behind a prediction can promote understandability but not necessarily trust. We also found that when model performance is low, the more information the AI system discloses, the less people trust the system. Lastly, whether humans agree with the AI predictions and whether the AI predictions are correct can also influence the effect of AI explanations.
We conclude this paper by discussing implications for designing AI systems that help healthcare consumers interpret diagnostic reports.

Original language: English (US)
Article number: 64
Journal: Journal of Medical Systems
Volume: 45
Issue number: 6
State: Published - Jun 2021

Keywords

  • Artificial intelligence
  • Decision making
  • Diagnostic results
  • Healthcare
  • Radiology report
  • Trust

ASJC Scopus subject areas

  • Medicine (miscellaneous)
  • Information Systems
  • Health Informatics
  • Health Information Management
