TY - GEN
T1 - Interpretable On-The-Fly Repair of Deep Neural Classifiers
AU - Mohasel Arjomandi, Hossein
AU - Jabbarvand, Reyhaneh
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/12/4
Y1 - 2023/12/4
N2 - Deep neural networks (DNNs) are vital in safety-critical systems but remain imperfect, leading to misclassifications post-deployment. Prior works either make the model abstain from predicting in uncertain cases, reducing its overall accuracy, or suffer from being uninterpretable. To overcome these limitations, we propose an interpretable approach that repairs misclassifications after model deployment, instead of discarding them, by reducing the multi-class classification problem to a simple binary classification. Our technique specifically targets the predictions that the model is uncertain about, extracts the training data that contributed positively and negatively to those uncertain decisions, and uses it to repair the cases where uncertainty leads to misclassification. We evaluate our approach on MNIST. The preliminary results show that our technique can repair 10.7% of the misclassifications on average, improving the performance of the models and motivating the applicability of on-the-fly repair to more complex classifiers and different modalities.
AB - Deep neural networks (DNNs) are vital in safety-critical systems but remain imperfect, leading to misclassifications post-deployment. Prior works either make the model abstain from predicting in uncertain cases, reducing its overall accuracy, or suffer from being uninterpretable. To overcome these limitations, we propose an interpretable approach that repairs misclassifications after model deployment, instead of discarding them, by reducing the multi-class classification problem to a simple binary classification. Our technique specifically targets the predictions that the model is uncertain about, extracts the training data that contributed positively and negatively to those uncertain decisions, and uses it to repair the cases where uncertainty leads to misclassification. We evaluate our approach on MNIST. The preliminary results show that our technique can repair 10.7% of the misclassifications on average, improving the performance of the models and motivating the applicability of on-the-fly repair to more complex classifiers and different modalities.
KW - Safe machine learning
KW - Safety critical systems
KW - Uncertainty
UR - http://www.scopus.com/inward/record.url?scp=85180414308&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85180414308&partnerID=8YFLogxK
U2 - 10.1145/3617574.3617860
DO - 10.1145/3617574.3617860
M3 - Conference contribution
AN - SCOPUS:85180414308
T3 - SE4SafeML 2023 - Proceedings of the 1st International Workshop on Dependability and Trustworthiness of Safety-Critical Systems with Machine Learned Components, Co-located with: ESEC/FSE 2023
SP - 14
EP - 17
BT - SE4SafeML 2023 - Proceedings of the 1st International Workshop on Dependability and Trustworthiness of Safety-Critical Systems with Machine Learned Components, Co-located with: ESEC/FSE 2023
A2 - Chechik, Marsha
A2 - Elbaum, Sebastian
A2 - Hu, Boyue Caroline
A2 - Marsso, Lina
A2 - von Stein, Meriel
PB - Association for Computing Machinery
T2 - 1st International Workshop on Dependability and Trustworthiness of Safety-Critical Systems with Machine Learned Components, SE4SafeML 2023. Co-located with: ESEC/FSE 2023
Y2 - 4 December 2023
ER -