TY - JOUR
T1 - Toward fairness in artificial intelligence for medical image analysis
T2 - Identification and mitigation of potential biases in the roadmap from data collection to model deployment
AU - Drukker, Karen
AU - Chen, Weijie
AU - Gichoya, Judy
AU - Gruszauskas, Nicholas
AU - Kalpathy-Cramer, Jayashree
AU - Koyejo, Sanmi
AU - Myers, Kyle
AU - Sá, Rui C.
AU - Sahiner, Berkman
AU - Whitney, Heather
AU - Zhang, Zi
AU - Giger, Maryellen
N1 - The research reported is part of MIDRC and was made possible by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) of the National Institutes of Health (Contract Nos. 75N92020C00008 and 75N92020C00021). R.C.S. is supported by NIH through the Data and Technology Advancement (DATA) National Service Scholar program.
The mention of commercial products, their sources, or their use in connection with material reported herein is not to be construed as either an actual or implied endorsement of such products by the Department of Health and Human Services. K.D. receives royalties from Hologic. J.G. has received funding support from the US National Science Foundation (Grant No. 1928481) from the Division of Electrical, Communication and Cyber Systems. J.K.-C. has no funding to report for this manuscript, but funding for other work unrelated to what is presented here includes a research grant from GE, research support from Genentech, consultant/stock options from Siloam Vision, LLC, and technology licensed to Boston AI. S.K. has received funding support (Award Nos. NSF IIS 2205329 and NSF IIS 2046795). K.M. works as an independent technical and regulatory consultant as principal for Puente Solutions LLC. Other co-authors have no conflicts of interest. M.L.G. is a stockholder in R2 Technology/Hologic and QView, receives royalties from Hologic, GE Medical Systems, MEDIAN Technologies, Riverain Medical, Mitsubishi, and Toshiba, and was a cofounder of Quantitative Insights (now consultant to Qlarity Imaging). It is the University of Chicago Conflict of Interest Policy that investigators disclose publicly actual or potential significant financial interests that would reasonably appear to be directly and significantly affected by the research activities.
PY - 2023/11/1
Y1 - 2023/11/1
N2 - Purpose: There is increasing interest in developing medical imaging-based machine learning methods, also known as medical imaging artificial intelligence (AI), for the detection, diagnosis, prognosis, and risk assessment of disease, with the goal of clinical implementation. These tools are intended to improve traditional human decision-making in medical imaging. However, biases introduced in the steps toward clinical deployment may impede their intended function, potentially exacerbating inequities: medical imaging AI can propagate or amplify biases introduced in the many steps from model inception to deployment, resulting in a systematic difference in the treatment of different groups. Recognizing and addressing these sources of bias is essential for algorithmic fairness and trustworthiness and for a just and equitable deployment of AI in medical imaging. Approach: Our multi-institutional team included medical physicists, medical imaging artificial intelligence/machine learning (AI/ML) researchers, experts in AI/ML bias, statisticians, physicians, and scientists from regulatory bodies. We identified sources of bias in AI/ML and strategies for mitigating them, and we developed recommendations for best practices in medical imaging AI/ML development. Results: Five main steps along the roadmap of medical imaging AI/ML were identified: (1) data collection, (2) data preparation and annotation, (3) model development, (4) model evaluation, and (5) model deployment. Within these steps, which also serve as bias categories, we identified 29 sources of potential bias, many of which can impact multiple steps, as well as mitigation strategies. Conclusions: Our findings provide a valuable resource to researchers, clinicians, and the public at large.
AB - Purpose: There is increasing interest in developing medical imaging-based machine learning methods, also known as medical imaging artificial intelligence (AI), for the detection, diagnosis, prognosis, and risk assessment of disease, with the goal of clinical implementation. These tools are intended to improve traditional human decision-making in medical imaging. However, biases introduced in the steps toward clinical deployment may impede their intended function, potentially exacerbating inequities: medical imaging AI can propagate or amplify biases introduced in the many steps from model inception to deployment, resulting in a systematic difference in the treatment of different groups. Recognizing and addressing these sources of bias is essential for algorithmic fairness and trustworthiness and for a just and equitable deployment of AI in medical imaging. Approach: Our multi-institutional team included medical physicists, medical imaging artificial intelligence/machine learning (AI/ML) researchers, experts in AI/ML bias, statisticians, physicians, and scientists from regulatory bodies. We identified sources of bias in AI/ML and strategies for mitigating them, and we developed recommendations for best practices in medical imaging AI/ML development. Results: Five main steps along the roadmap of medical imaging AI/ML were identified: (1) data collection, (2) data preparation and annotation, (3) model development, (4) model evaluation, and (5) model deployment. Within these steps, which also serve as bias categories, we identified 29 sources of potential bias, many of which can impact multiple steps, as well as mitigation strategies. Conclusions: Our findings provide a valuable resource to researchers, clinicians, and the public at large.
KW - artificial intelligence
KW - bias
KW - fairness
KW - machine learning
UR - http://www.scopus.com/inward/record.url?scp=85164415386&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85164415386&partnerID=8YFLogxK
U2 - 10.1117/1.JMI.10.6.061104
DO - 10.1117/1.JMI.10.6.061104
M3 - Article
C2 - 37125409
AN - SCOPUS:85164415386
SN - 2329-4302
VL - 10
JO - Journal of Medical Imaging
JF - Journal of Medical Imaging
IS - 6
M1 - 061104
ER -