TY - JOUR
T1 - To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems
AU - Amann, Julia
AU - Vetter, Dennis
AU - Blomberg, Stig Nikolaj
AU - Christensen, Helle Collatz
AU - Coffee, Megan
AU - Gerke, Sara
AU - Gilbert, Thomas K.
AU - Hagendorff, Thilo
AU - Holm, Sune
AU - Livne, Michelle
AU - Spezzatti, Andy
AU - Strümke, Inga
AU - Zicari, Roberto V.
AU - Madai, Vince Istvan
AU - Z-Inspection initiative
PY - 2022/2/1
Y1 - 2022/2/1
N2 - Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper presents a review of the key arguments in favor of and against explainability for AI-powered Clinical Decision Support Systems (CDSSs), applied to a concrete use case, namely an AI-powered CDSS currently used in the emergency call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role of explainability for CDSSs in the concrete use case, allowing for abstraction to a more general level. Our analysis focused on three layers: technical considerations, human factors, and the designated system role in decision-making. Our findings suggest that whether explainability can provide added value to a CDSS depends on several key questions: technical feasibility, the level of validation in the case of explainable algorithms, the characteristics of the context in which the system is implemented, the designated role in the decision-making process, and the key user group(s). Thus, each CDSS requires an individualized assessment of its explainability needs, and we provide an example of what such an assessment could look like in practice.
AB - Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper presents a review of the key arguments in favor of and against explainability for AI-powered Clinical Decision Support Systems (CDSSs), applied to a concrete use case, namely an AI-powered CDSS currently used in the emergency call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role of explainability for CDSSs in the concrete use case, allowing for abstraction to a more general level. Our analysis focused on three layers: technical considerations, human factors, and the designated system role in decision-making. Our findings suggest that whether explainability can provide added value to a CDSS depends on several key questions: technical feasibility, the level of validation in the case of explainable algorithms, the characteristics of the context in which the system is implemented, the designated role in the decision-making process, and the key user group(s). Thus, each CDSS requires an individualized assessment of its explainability needs, and we provide an example of what such an assessment could look like in practice.
UR - http://www.scopus.com/inward/record.url?scp=85142529836&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85142529836&partnerID=8YFLogxK
U2 - 10.1371/journal.pdig.0000016
DO - 10.1371/journal.pdig.0000016
M3 - Article
SN - 2767-3170
VL - 1
SP - e0000016
JO - PLOS Digital Health
JF - PLOS Digital Health
IS - 2
M1 - e0000016
ER -