TY - JOUR
T1 - Explainable, interpretable, and trustworthy AI for an intelligent digital twin
T2 - A case study on remaining useful life
AU - Kobayashi, Kazuma
AU - Alam, Syed Bahauddin
N1 - Publisher Copyright:
© 2023 Elsevier Ltd
PY - 2024/3
Y1 - 2024/3
N2 - Artificial intelligence (AI) and machine learning (ML) are increasingly used for digital twin development in energy and engineering systems, but these models must be fair, unbiased, interpretable, and explainable, and confidence in their trustworthiness is critical. ML techniques have proven useful for predicting important parameters and improving model performance; however, for these AI techniques to support decision-making, they must be auditable, accountable, and easy to understand. The use of explainable AI (XAI) and interpretable machine learning (IML) is therefore crucial for the accurate prediction of prognostics, such as remaining useful life (RUL), in a digital twin system: it makes the system intelligent while ensuring that the AI model is transparent in its decision-making and that the predictions it generates can be understood and trusted by users. With explainable, interpretable, and trustworthy AI, intelligent digital twin systems can make more accurate RUL predictions, leading to better maintenance and repair planning and, ultimately, improved system performance. This paper explains the ideas of XAI and IML and justifies the important role of AI/ML in digital twin components, where XAI is needed to better understand the predictions. It presents the importance and fundamentals of XAI and IML, in both local and global aspects, in terms of feature selection, model interpretability, and model diagnosis and validation, to ensure the reliable use of trustworthy AI/ML applications for RUL prediction.
AB - Artificial intelligence (AI) and machine learning (ML) are increasingly used for digital twin development in energy and engineering systems, but these models must be fair, unbiased, interpretable, and explainable, and confidence in their trustworthiness is critical. ML techniques have proven useful for predicting important parameters and improving model performance; however, for these AI techniques to support decision-making, they must be auditable, accountable, and easy to understand. The use of explainable AI (XAI) and interpretable machine learning (IML) is therefore crucial for the accurate prediction of prognostics, such as remaining useful life (RUL), in a digital twin system: it makes the system intelligent while ensuring that the AI model is transparent in its decision-making and that the predictions it generates can be understood and trusted by users. With explainable, interpretable, and trustworthy AI, intelligent digital twin systems can make more accurate RUL predictions, leading to better maintenance and repair planning and, ultimately, improved system performance. This paper explains the ideas of XAI and IML and justifies the important role of AI/ML in digital twin components, where XAI is needed to better understand the predictions. It presents the importance and fundamentals of XAI and IML, in both local and global aspects, in terms of feature selection, model interpretability, and model diagnosis and validation, to ensure the reliable use of trustworthy AI/ML applications for RUL prediction.
KW - Digital twin
KW - Explainable AI
KW - Interpretable AI
KW - Remaining useful life
UR - http://www.scopus.com/inward/record.url?scp=85179130772&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85179130772&partnerID=8YFLogxK
U2 - 10.1016/j.engappai.2023.107620
DO - 10.1016/j.engappai.2023.107620
M3 - Article
AN - SCOPUS:85179130772
SN - 0952-1976
VL - 129
JO - Engineering Applications of Artificial Intelligence
JF - Engineering Applications of Artificial Intelligence
M1 - 107620
ER -