Experience report: How is dynamic symbolic execution different from manual testing? A study on KLEE

Xiaoyin Wang, Lingming Zhang, Philip Tanofsky

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Software testing has been the major approach to software quality assurance for decades, but it typically involves intensive manual effort. To reduce manual effort, researchers have proposed numerous approaches to automate test-case generation, which is one of the most time-consuming tasks in software testing. One of the most recent achievements in the area is Dynamic Symbolic Execution (DSE), and tools based on DSE, such as KLEE, have been reported to generate test suites achieving higher code coverage than manually developed test suites. However, beyond the competitive code coverage, few studies have compared DSE-based test suites with manually developed test suites more thoroughly on various metrics to understand the detailed differences between the two testing methodologies. In this paper, we revisit the experimental study on the KLEE tool and GNU CoreUtils programs, and compare KLEE-based test suites with manually developed test suites on various aspects. We further carried out a qualitative study to investigate the reasons behind the differences in the statistical results. The results of our studies show that while KLEE-based test suites are able to generate test cases with higher code coverage, they are relatively less effective at covering hard-to-cover code and killing mutants. Furthermore, our qualitative study reveals that KLEE-based test suites have advantages in exploring error-handling code and exhausting options, but are less effective at generating valid string inputs and exploring meaningful program behaviors.
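To make the contrast concrete, the following minimal sketch (our illustration, not code from the paper) shows how a C program is typically driven under KLEE: an input is marked symbolic with klee_make_symbolic, and KLEE forks one path per feasible branch, which readily reaches error-handling code but carries no notion of which inputs are semantically meaningful. The function and file names here are hypothetical.

    /* parse_flag.c - toy KLEE driver; compile to LLVM bitcode
     * (e.g. with clang -emit-llvm -c) before running it under KLEE. */
    #include <klee/klee.h>

    /* A toy option parser: unknown flags take the error-handling branch. */
    int parse_flag(char c) {
      if (c == 'a') return 1;   /* option A */
      if (c == 'b') return 2;   /* option B */
      return -1;                /* error-handling branch */
    }

    int main(void) {
      char c;
      klee_make_symbolic(&c, sizeof(c), "c");  /* mark the input symbolic */
      return parse_flag(c);     /* KLEE explores all three branches */
    }

Running KLEE on the resulting bitcode emits one test case per explored path, which illustrates why DSE is strong at exhausting option combinations and error paths, whereas producing a semantically valid string input (for example, a well-formed date accepted by a CoreUtils program) requires constraints the tool cannot infer on its own.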

Original language: English (US)
Title of host publication: 2015 International Symposium on Software Testing and Analysis, ISSTA 2015 - Proceedings
Publisher: Association for Computing Machinery
Pages: 199-210
Number of pages: 12
ISBN (Electronic): 9781450336208
DOIs
State: Published - Jul 13 2015
Externally published: Yes
Event: 24th International Symposium on Software Testing and Analysis, ISSTA 2015 - Baltimore, United States
Duration: Jul 13 2015 - Jul 17 2015

Publication series

Name: 2015 International Symposium on Software Testing and Analysis, ISSTA 2015 - Proceedings

Other

Other: 24th International Symposium on Software Testing and Analysis, ISSTA 2015
Country/Territory: United States
City: Baltimore
Period: 7/13/15 - 7/17/15

Keywords

  • Dynamic symbolic execution
  • Empirical study
  • Manual testing

ASJC Scopus subject areas

  • Software
