TY - GEN
T1 - DeepMutation: Mutation Testing of Deep Learning Systems
T2 - 29th IEEE International Symposium on Software Reliability Engineering, ISSRE 2018
AU - Ma, Lei
AU - Zhang, Fuyuan
AU - Sun, Jiyuan
AU - Xue, Minhui
AU - Li, Bo
AU - Juefei-Xu, Felix
AU - Xie, Chao
AU - Li, Li
AU - Liu, Yang
AU - Zhao, Jianjun
AU - Wang, Yadong
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/11/16
Y1 - 2018/11/16
N2 - Deep learning (DL) defines a new data-driven programming paradigm where the internal system logic is largely shaped by the training data. The standard way of evaluating DL models is to examine their performance on a test dataset. The quality of the test dataset is of great importance for gaining confidence in the trained models: with an inadequate test dataset, DL models that achieve high test accuracy may still lack generality and robustness. In traditional software testing, mutation testing is a well-established technique for evaluating the quality of test suites; it analyzes to what extent a test suite detects injected faults. However, due to the fundamental differences between traditional software and deep learning-based software, traditional mutation testing techniques cannot be directly applied to DL systems. In this paper, we propose a mutation testing framework specialized for DL systems to measure the quality of test data. In the spirit of mutation testing for traditional software, we first define a set of source-level mutation operators that inject faults into the sources of DL (i.e., training data and training programs). We then design a set of model-level mutation operators that inject faults directly into DL models without a training process. The quality of the test data can then be evaluated by analyzing to what extent the injected faults are detected. The usefulness of the proposed mutation testing techniques is demonstrated on two public datasets, MNIST and CIFAR-10, with three DL models.
AB - Deep learning (DL) defines a new data-driven programming paradigm where the internal system logic is largely shaped by the training data. The standard way of evaluating DL models is to examine their performance on a test dataset. The quality of the test dataset is of great importance for gaining confidence in the trained models: with an inadequate test dataset, DL models that achieve high test accuracy may still lack generality and robustness. In traditional software testing, mutation testing is a well-established technique for evaluating the quality of test suites; it analyzes to what extent a test suite detects injected faults. However, due to the fundamental differences between traditional software and deep learning-based software, traditional mutation testing techniques cannot be directly applied to DL systems. In this paper, we propose a mutation testing framework specialized for DL systems to measure the quality of test data. In the spirit of mutation testing for traditional software, we first define a set of source-level mutation operators that inject faults into the sources of DL (i.e., training data and training programs). We then design a set of model-level mutation operators that inject faults directly into DL models without a training process. The quality of the test data can then be evaluated by analyzing to what extent the injected faults are detected. The usefulness of the proposed mutation testing techniques is demonstrated on two public datasets, MNIST and CIFAR-10, with three DL models.
KW - Deep learning
KW - Software testing
KW - Deep neural networks
KW - Mutation testing
UR - http://www.scopus.com/inward/record.url?scp=85056557793&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85056557793&partnerID=8YFLogxK
U2 - 10.1109/ISSRE.2018.00021
DO - 10.1109/ISSRE.2018.00021
M3 - Conference contribution
AN - SCOPUS:85056557793
T3 - Proceedings - International Symposium on Software Reliability Engineering, ISSRE
SP - 100
EP - 111
BT - Proceedings - 29th IEEE International Symposium on Software Reliability Engineering, ISSRE 2018
A2 - Ghosh, Sudipto
A2 - Cukic, Bojan
A2 - Poston, Robin
A2 - Natella, Roberto
A2 - Laranjeiro, Nuno
PB - IEEE Computer Society
Y2 - 15 October 2018 through 18 October 2018
ER -