TY - JOUR
T1 - Toward fairness in artificial intelligence for medical image analysis
T2 - Identification and mitigation of potential biases in the roadmap from data collection to model deployment
AU - Drukker, Karen
AU - Chen, Weijie
AU - Gichoya, Judy
AU - Gruszauskas, Nicholas
AU - Kalpathy-Cramer, Jayashree
AU - Koyejo, Sanmi
AU - Myers, Kyle
AU - Sá, Rui C.
AU - Sahiner, Berkman
AU - Whitney, Heather
AU - Zhang, Zi
AU - Giger, Maryellen
N1 - Publisher Copyright:
© The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
PY - 2023/11/1
Y1 - 2023/11/1
AB - Purpose: To recognize and address various sources of bias, a task essential to algorithmic fairness and trustworthiness and to the just and equitable deployment of AI in medical imaging. There is increasing interest in developing medical imaging-based machine learning methods, also known as medical imaging artificial intelligence (AI), for the detection, diagnosis, prognosis, and risk assessment of disease, with the goal of clinical implementation. These tools are intended to help improve traditional human decision-making in medical imaging. However, biases introduced in the steps toward clinical deployment may impede their intended function, potentially exacerbating inequities; specifically, medical imaging AI can propagate or amplify biases introduced at any of the many steps from model inception to deployment, resulting in systematic differences in the treatment of different groups. Approach: Our multi-institutional team included medical physicists, medical imaging artificial intelligence/machine learning (AI/ML) researchers, experts in AI/ML bias, statisticians, physicians, and scientists from regulatory bodies. We identified sources of bias in medical imaging AI/ML and mitigation strategies for these biases, and we developed recommendations for best practices in medical imaging AI/ML development. Results: We identified five main steps along the roadmap of medical imaging AI/ML: (1) data collection, (2) data preparation and annotation, (3) model development, (4) model evaluation, and (5) model deployment. Within these steps, or bias categories, we identified 29 sources of potential bias, many of which can affect multiple steps, together with mitigation strategies for each. Conclusions: Our findings provide a valuable resource for researchers, clinicians, and the public at large.
KW - artificial intelligence
KW - bias
KW - fairness
KW - machine learning
UR - http://www.scopus.com/inward/record.url?scp=85164415386&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85164415386&partnerID=8YFLogxK
DO - 10.1117/1.JMI.10.6.061104
M3 - Article
C2 - 37125409
AN - SCOPUS:85164415386
SN - 2329-4302
VL - 10
JO - Journal of Medical Imaging
JF - Journal of Medical Imaging
IS - 6
M1 - 061104
ER -