Combining test case generation and runtime verification

Cyrille Artho, Howard Barringer, Allen Goldberg, Klaus Havelund, Sarfraz Khurshid, Mike Lowry, Corina Pasareanu, Grigore Roşu, Koushik Sen, Willem Visser, Rich Washington

Research output: Contribution to journal › Article › peer-review


Software testing is typically an ad hoc process in which human testers manually write test inputs and descriptions of expected test results, perhaps automating their execution in a regression suite. This process is cumbersome and costly. This paper reports results on a framework that further automates it. The framework combines automated test case generation, based on systematically exploring the program's input domain, with runtime verification, in which execution traces are monitored and verified against properties expressed in temporal logic. Capabilities also exist for analyzing traces for concurrency errors such as deadlocks and data races. The input domain of the program is explored using a model checker extended with symbolic execution. Properties are formulated in an expressive temporal logic. A methodology is advocated that automatically generates properties specific to each input rather than formulating properties uniformly true for all inputs. The paper describes an application of the technology to a NASA rover controller.
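The combination the abstract describes, systematically enumerating inputs and checking each resulting execution trace against a temporal property, can be illustrated with a minimal sketch. All names here (`run_under_test`, `monitor_open_close`, the "open/close" property) are hypothetical stand-ins, not the paper's actual framework, which builds on a model checker with symbolic execution rather than the plain enumeration shown:

```python
def run_under_test(n):
    """Toy instrumented program: returns the event trace for input n.
    (Stand-in for the real program whose traces would be monitored.)"""
    trace = ["open"]
    if n % 2 == 0:          # seeded bug: even inputs never close
        trace.append("work")
    else:
        trace.extend(["work", "close"])
    return trace

def monitor_open_close(trace):
    """Runtime monitor for the temporal property G(open -> F close)
    evaluated over a finite trace: every 'open' must eventually be
    followed by a 'close'."""
    pending = False
    for event in trace:
        if event == "open":
            pending = True
        elif event == "close":
            pending = False
    return not pending      # holds iff no 'open' is left pending

# Systematic exploration of a small input domain (exhaustive here,
# where the paper uses a model checker extended with symbolic execution).
failures = [n for n in range(6) if not monitor_open_close(run_under_test(n))]
print(failures)             # → [0, 2, 4]
```

The per-input flavor of the paper's methodology shows up in the pairing: each generated input is run and its own trace is checked, rather than one property being asserted uniformly over all runs.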

Original language: English (US)
Pages (from-to): 209-234
Number of pages: 26
Journal: Theoretical Computer Science
Issue number: 2
State: Published - 2005


Keywords
  • Automated testing
  • Concurrency analysis
  • Model checking
  • NASA rover controller
  • Runtime verification
  • Symbolic execution
  • Temporal logic
  • Test case generation

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)


