DART: directed automated random testing. We present a new tool, named DART, for automatically testing software that combines three main techniques: (1) automated extraction of the interface of a program with its external environment using static source-code parsing; (2) automatic generation of a test driver for this interface that performs random testing to simulate the most general environment the program can operate in; and (3) dynamic analysis of how the program behaves under random testing and automatic generation of new test inputs to direct systematically the execution along alternative program paths. Together, these three techniques constitute Directed Automated Random Testing, or DART for short. The main strength of DART is thus that testing can be performed completely automatically on any program that compiles -- there is no need to write any test driver or harness code. During testing, DART detects standard errors such as program crashes, assertion violations, and non-termination. Preliminary experiments to unit test several examples of C programs are very encouraging.
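The directed search the abstract describes — run on a random input, record the branch decisions taken, then negate one to steer the next execution down an alternative path — can be sketched as a toy concolic loop. Everything below (the `program` under test, its two branch predicates, and the random-search `solve` used as a stand-in for DART's linear constraint solver) is illustrative and not from the paper:

```python
import random

# Toy program under test. DART would extract this interface (two int
# inputs) automatically; here we record each branch outcome in `trace`.
def program(x, y, trace):
    trace.append(("x == 2*y", x == 2 * y))
    if x == 2 * y:
        trace.append(("x > 100", x > 100))
        if x > 100:
            return "bug"  # deep branch a blind random tester rarely hits
    return "ok"

PREDS = {"x == 2*y": lambda x, y: x == 2 * y,
         "x > 100":  lambda x, y: x > 100}

def solve(prefix, flipped, tries=200000):
    """Find inputs that follow the branch history `prefix` but take the
    other side of `flipped`. Real DART solves linear constraints
    symbolically; random search here is purely an illustration."""
    name, taken = flipped
    for _ in range(tries):
        x, y = random.randint(-200, 200), random.randint(-200, 200)
        if (all(PREDS[n](x, y) == b for n, b in prefix)
                and PREDS[name](x, y) != taken):
            return x, y
    return None

def dart(max_iters=50):
    # 1. Start from a random input (the "random testing" part).
    x, y = random.randint(-200, 200), random.randint(-200, 200)
    for _ in range(max_iters):
        trace = []
        if program(x, y, trace) == "bug":
            return "bug", x, y
        # 2. Negate the deepest branch to direct the next execution
        #    along an unexplored path (the "directed" part).
        for depth in range(len(trace) - 1, -1, -1):
            new_input = solve(trace[:depth], trace[depth])
            if new_input:
                x, y = new_input
                break
        else:
            return "explored", None, None
    return "limit", None, None

random.seed(1)
print(dart())
```

A purely random tester has roughly a 1-in-1,300,000 chance per trial of hitting the `bug` branch (it must draw `x == 2*y` and `x > 100` simultaneously), while the directed loop reaches it in a handful of iterations by flipping one branch at a time.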

This software has also been peer reviewed by the journal TOMS (ACM Transactions on Mathematical Software).

References in zbMATH (referenced in 74 articles)

Showing results 41 to 60 of 74.
Sorted by year (citations)
  41. Bué, Pierre-Christophe; Julliand, Jacques; Masson, Pierre-Alain: Association of under-approximation techniques for generating tests from models (2011)
  42. Giannakopoulou, Dimitra; Bushnell, David H.; Schumann, Johann; Erzberger, Heinz; Heere, Karen: Formal testing for separation assurance (2011)
  43. Héam, Pierre-Cyrille; Masson, Catherine: A random testing approach using pushdown automata (2011)
  44. Holzer, Andreas; Tautschnig, Michael; Schallhart, Christian; Veith, Helmut: An introduction to test specification in FQL (2011)
  45. Hooimeijer, Pieter; Veanes, Margus: An evaluation of automata algorithms for string analysis (2011)
  46. Katelman, Michael; Meseguer, José: vlogsl: a strategy language for simulation-based verification of hardware (2011)
  47. Kim, Moonzoo; Kim, Yunho: Automated analysis of industrial embedded software (2011)
  48. Krishnamoorthy, Saparya; Hsiao, Michael S.; Lingappan, Loganathan: Strategies for scalable symbolic execution-driven test generation for programs (2011)
  49. Obdržálek, Jan; Trtík, Marek: Efficient loop navigation for symbolic execution (2011)
  50. Sen, Koushik: DART: directed automated random testing (2011)
  51. Tschannen, Julian; Furia, Carlo A.; Nordio, Martin; Meyer, Bertrand: Usable verification of object-oriented programs by combining static and dynamic techniques (2011)
  52. Angeletti, Damiano; Giunchiglia, Enrico; Narizzano, Massimo; Puddu, Alessandra; Sabina, Salvatore: Using bounded model checking for coverage analysis of safety-critical software in an industrial setting (2010)
  53. Godefroid, Patrice; Nori, Aditya V.; Rajamani, Sriram K.; Tetali, Sai Deep: Compositional may-must program analysis: unleashing the power of alternation (2010)
  54. Kim, Yunho; Kim, Moonzoo; Dang, Nam: Scalable distributed concolic testing: a case study on a flash storage platform (2010)
  55. Anand, Saswat; Păsăreanu, Corina S.; Visser, Willem: Symbolic execution with abstraction (2009)
  56. Bjørner, Nikolaj; Tillmann, Nikolai; Voronkov, Andrei: Path feasibility analysis for string-manipulating programs (2009)
  57. Helmstetter, C.; Maraninchi, F.; Maillet-Contoz, L.: Full simulation coverage for SystemC transaction-level models of systems-on-a-chip (2009)
  58. Jéron, Thierry: Symbolic model-based test selection (2009)
  59. Kuliamin, V. V.: Integration of verification methods for program systems (2009)
  60. Păsăreanu, Corina S.; Visser, Willem: A survey of new trends in symbolic execution for software testing and analysis (2009)