TY - JOUR
T1 - Beware explanations from AI in health care
AU - Babic, Boris
AU - Gerke, Sara
AU - Evgeniou, Theodoros
AU - Glenn Cohen, I.
N1 - We thank S. Wachter for feedback on an earlier version of this manuscript. All authors contributed equally to the analysis and drafting of the paper. Funding: S.G. and I.G.C. were supported by a grant from the Collaborative Research Program for Biomedical Innovation Law, a scientifically independent collaborative research program supported by a Novo Nordisk Foundation grant (NNF17SA0027784). I.G.C. was also supported by Diagnosing in the Home: The Ethical, Legal, and Regulatory Challenges and Opportunities of Digital Home Health, a grant from the Gordon and Betty Moore Foundation (grant agreement number 9974). Competing interests: S.G. is a member of the Advisory Group–Academic of the American Board of Artificial Intelligence in Medicine. I.G.C. serves as a bioethics consultant for Otsuka on their Abilify MyCite product. I.G.C. is a member of the Illumina ethics advisory board. I.G.C. serves as an ethics consultant for Dawnlight. The authors declare no other competing interests.
PY - 2021/7/16
Y1 - 2021/7/16
AB - Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of their users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users' skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.
UR - https://www.scopus.com/pages/publications/85110525466
UR - https://www.scopus.com/inward/citedby.url?scp=85110525466&partnerID=8YFLogxK
U2 - 10.1126/science.abg1834
DO - 10.1126/science.abg1834
M3 - Article
C2 - 34437144
AN - SCOPUS:85110525466
SN - 0036-8075
VL - 373
SP - 284
EP - 286
JO - Science
JF - Science
IS - 6552
ER -