Eclat: Automatic Generation and Classification of Test Inputs

This paper describes a technique that selects, from a large set of test inputs, a small subset likely to reveal faults in the software under test. The technique takes a program or software component, plus a set of correct executions, for example from observations of the software running properly, or from an existing test suite that a user wishes to enhance. The technique first infers an operational model of the software's behavior. Inputs whose operational pattern of execution differs from the model in specific ways are suggestive of faults. These inputs are further reduced by selecting only one input per operational pattern. The result is a small portion of the original inputs, deemed by the technique as most likely to reveal faults. Thus, the technique can also be seen as an error-detection technique.

The paper describes two additional techniques that complement test input selection. One automatically produces an oracle (a set of assertions) for a test input from the operational model, thus transforming the test input into a test case. The other is a classification-guided test input generation technique that also makes use of operational models and patterns: when generating inputs, it filters out code sequences that are unlikely to contribute to legal inputs, improving the efficiency of its search for fault-revealing inputs.

We have implemented these techniques in the Eclat tool, which generates unit tests for Java classes. Eclat's input is a set of classes to test and an example program execution, say, a passing test suite. Eclat's output is a set of JUnit test cases, each containing a potentially fault-revealing input and a set of assertions at least one of which fails. In our experiments, Eclat successfully generated inputs that exposed fault-revealing behavior; we have used Eclat to reveal real errors in programs. The inputs it selects as fault-revealing are an order of magnitude more likely to reveal a fault than all generated inputs.
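The core selection idea, that an input whose execution violates a model inferred from correct runs is a fault-reveal candidate, can be illustrated with a minimal Java sketch. All names here are hypothetical; the real Eclat infers much richer operational models (via Daikon-style invariant detection) rather than a single hand-written invariant.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntUnaryOperator;

// Minimal sketch of operational-model-based input classification.
public class OperationalModelSketch {

    // Toy "operational model": one invariant observed on correct runs.
    interface Invariant { boolean holds(int input, int result); }

    // Keep only candidate inputs whose execution violates the model;
    // these deviate from observed correct behavior and are flagged
    // as likely fault-revealing.
    static List<Integer> selectViolating(IntUnaryOperator program,
                                         Invariant model,
                                         List<Integer> candidates) {
        List<Integer> suspicious = new ArrayList<>();
        for (int in : candidates) {
            int out = program.applyAsInt(in);
            if (!model.holds(in, out)) {
                suspicious.add(in);
            }
        }
        return suspicious;
    }

    public static void main(String[] args) {
        // Program under test: absolute value with a seeded fault
        // for inputs below -100.
        IntUnaryOperator abs = x -> x < -100 ? x : Math.abs(x);
        // Model inferred from correct runs: result is non-negative.
        Invariant nonNegative = (in, out) -> out >= 0;
        List<Integer> candidates = List.of(-200, -5, 0, 7);
        // Only -200 violates the model and is selected.
        System.out.println(selectViolating(abs, nonNegative, candidates));
    }
}
```

A second Eclat step not shown here would additionally group the violating inputs by which model properties they break, keeping one representative per pattern to shrink the suite further.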
References in zbMATH (referenced in 8 articles)
- Krishnamoorthy, Saparya; Hsiao, Michael S.; Lingappan, Loganathan: Strategies for scalable symbolic execution-driven test generation for programs (2011)
- Hao, Dan; Xie, Tao; Zhang, Lu; Wang, Xiaoyin; Sun, Jiasu; Mei, Hong: Test input reduction for result inspection to facilitate fault localization (2010)
- Kuperberg, Michael; Omri, Fouad: Using heuristics to automate parameter generation for benchmarking of Java methods (2009)
- Pasternak, Benny; Tyszberowicz, Shmuel; Yehudai, Amiram: GenUTest: a unit test and mock aspect generation tool (2009)
- Zybin, R.S.; Kuliamin, V.V.; Ponomarenko, A.V.; Rubanov, V.V.; Chernov, E.S.: Automation of broad sanity test generation (2008)
- Ernst, Michael D.; Perkins, Jeff H.; Guo, Philip J.; McCamant, Stephen; Pacheco, Carlos; Tschantz, Matthew S.; Xiao, Chen: The Daikon system for dynamic detection of likely invariants (2007)
- Simons, Anthony J.H.: Jwalk: a tool for lazy, systematic testing of Java classes by design introspection and user interaction (2007)