KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs. We present KLEE, a symbolic execution tool capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the core user-level environment installed on millions of Unix systems, and arguably are the single most heavily tested set of open-source programs in existence. KLEE-generated tests achieve high line coverage — on average over 90% per tool (median: over 94%) — and significantly beat the coverage of the developers’ own hand-written test suites. When we did the same for 75 equivalent tools in the BUSYBOX embedded system suite, results were even better, including 100% coverage on 31 of them. We also used KLEE as a bug finding tool, applying it to 452 applications (over 430K total lines of code), where it found 56 serious bugs, including three in COREUTILS that had been missed for over 15 years. Finally, we used KLEE to cross-check purportedly identical BUSYBOX and COREUTILS utilities, finding functional correctness errors and a myriad of inconsistencies.
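The core idea behind KLEE-style test generation can be illustrated with a minimal sketch (this is not KLEE itself, and the names here are invented for illustration): a symbolic executor forks the execution state at every branch, accumulates a path condition per path, and asks a solver for a concrete input witnessing each feasible path. The toy below restricts programs to a chain of comparisons `x < t` on a single symbolic integer, so the "path condition" is just an interval and the "solver" is interval reasoning rather than STP, which KLEE uses.

```python
# Toy symbolic executor: one symbolic int input, path conditions as intervals.
import math

def solve(lo, hi):
    """Return a concrete witness in [lo, hi], or None if the path is infeasible."""
    if lo > hi:
        return None
    if lo != -math.inf:
        return lo
    return hi if hi != math.inf else 0

def explore(branches, lo=-math.inf, hi=math.inf, path=()):
    """Depth-first state forking. `branches` is a list of thresholds t; each
    splits the current state into the 'x < t' and 'x >= t' successors."""
    if not branches:
        witness = solve(lo, hi)          # one concrete test case per feasible path
        if witness is not None:
            yield path, witness
        return
    t, *rest = branches
    # True branch: x < t tightens the upper bound.
    yield from explore(rest, lo, min(hi, t - 1), path + (f"x<{t}",))
    # False branch: x >= t tightens the lower bound.
    yield from explore(rest, max(lo, t), hi, path + (f"x>={t}",))

if __name__ == "__main__":
    # Program shape: if (x < 0) ... ; if (x < 10) ...  -> 4 paths, 3 feasible.
    for path, x in explore([0, 10]):
        print(" && ".join(path), "-> test input x =", x)
```

Running this prints one generated input per feasible path (e.g. `x<0 && x<10 -> test input x = -1`), while the contradictory path `x<0 && x>=10` is pruned; real symbolic executors do the same at scale with full constraint solvers and systematic path-search heuristics.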

References in zbMATH (referenced in 17 articles)


  1. Garg, Pranav; Löding, Christof; Madhusudan, P.; Neider, Daniel: Quantified data automata for linear data structures: a register automaton model with applications to learning invariants of programs manipulating arrays and lists (2015)
  2. Chong, Nathan; Donaldson, Alastair F.; Ketema, Jeroen: A sound and complete abstraction for reasoning about parallel prefix sums (2014)
  3. Groce, Alex; Havelund, Klaus; Holzmann, Gerard; Joshi, Rajeev; Xu, Ru-Gang: Establishing flight software reliability: testing, model checking, constraint-solving, monitoring and learning (2014)
  4. Romano, Anthony: Practical floating-point tests with integer code (2014)
  5. Botinčan, Matko; Babić, Domagoj: Sigma$^*$: symbolic learning of input-output specifications (2013)
  6. Chattopadhyay, Sudipta; Roychoudhury, Abhik: Scalable and precise refinement of cache timing analysis via path-sensitive verification (2013)
  7. Palikareva, Hristina; Cadar, Cristian: Multi-solver support in symbolic execution (2013)
  8. Kim, Moonzoo; Kim, Yunho; Choi, Yunja: Concolic testing of the multi-sector read operation for flash storage platform software (2012)
  9. Holzer, Andreas; Tautschnig, Michael; Schallhart, Christian; Veith, Helmut: An introduction to test specification in FQL (2011)
  10. Obdržálek, Jan; Trtík, Marek: Efficient loop navigation for symbolic execution (2011)
  11. Siegel, Stephen F.; Zirkel, Timothy K.: TASS: the toolkit for accurate scientific software (2011)
  12. Siegel, Stephen F.; Zirkel, Timothy K.: Collective assertions (2011)
  13. Siegel, Stephen F.; Zirkel, Timothy K.: FEVS: a functional equivalence verification suite for high-performance scientific computing (2011)
  14. Angeletti, Damiano; Giunchiglia, Enrico; Narizzano, Massimo; Puddu, Alessandra; Sabina, Salvatore: Using bounded model checking for coverage analysis of safety-critical software in an industrial setting (2010)
  15. Kim, Yunho; Kim, Moonzoo; Dang, Nam: Scalable distributed concolic testing: a case study on a flash storage platform (2010)
  16. Sherman, Elena; Garvin, Brady J.; Dwyer, Matthew B.: A slice-based decision procedure for type-based partial orders (2010)
  17. Kim, Moonzoo; Kim, Yunho: Concolic testing of the multi-sector read operation for flash memory file system (2009)