TY - JOUR
T1 - Interpretable Machine Learning for Discovery
T2 - Statistical Challenges and Opportunities
AU - Allen, Genevera I.
AU - Gan, Luqin
AU - Zheng, Lili
N1 - The authors gratefully acknowledge support from the National Science Foundation (NSF) NeuroNex-1707400, the National Institutes of Health (NIH) 1R01GM140468, and NSF DMS-2210837.
PY - 2024/4/22
Y1 - 2024/4/22
AB - New technologies have led to vast troves of large and complex data sets across many scientific domains and industries. People routinely use machine learning techniques not only to process, visualize, and make predictions from these big data, but also to make data-driven discoveries. These discoveries are often made using interpretable machine learning, or machine learning models and techniques that yield human-understandable insights. In this article, we discuss and review the field of interpretable machine learning, focusing especially on the techniques as they are often employed to generate new knowledge or make discoveries from large data sets. We outline the types of discoveries that can be made using interpretable machine learning in both supervised and unsupervised settings. Additionally, we focus on the grand challenge of how to validate these discoveries in a data-driven manner, which promotes trust in machine learning systems and reproducibility in science. We discuss validation both from a practical perspective, reviewing approaches based on data-splitting and stability, and from a theoretical perspective, reviewing statistical results on model selection consistency and uncertainty quantification via statistical inference. Finally, we conclude by highlighting open challenges in using interpretable machine learning techniques to make discoveries, including gaps between theory and practice for validating data-driven discoveries.
KW - data-driven discoveries
KW - explainability
KW - interpretability
KW - machine learning
KW - selection consistency
KW - stability
KW - uncertainty quantification
KW - validation
UR - http://www.scopus.com/inward/record.url?scp=85187716279&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85187716279&partnerID=8YFLogxK
U2 - 10.1146/annurev-statistics-040120-030919
DO - 10.1146/annurev-statistics-040120-030919
M3 - Review article
AN - SCOPUS:85187716279
SN - 2326-8298
VL - 11
SP - 97
EP - 121
JO - Annual Review of Statistics and Its Application
JF - Annual Review of Statistics and Its Application
IS - 1
ER -