VCWC: A Versioning Competition Workflow Compiler

System competitions evaluate solvers and compare state-of-the-art implementations on benchmark sets in a dedicated, controlled computing environment, usually comprising multiple machines. Recent initiatives such as [6] aim at establishing best practices for evaluations in computer science, in particular by identifying measures that ensure repeatability, by excluding common pitfalls, and by introducing appropriate tools. For instance, Asparagus [1] focuses on maintaining benchmarks and their instances. Other well-known tools, such as Runlim (url{}) and Runsolver [12], help to limit resources and to measure the CPU time and memory usage of solver runs. Further systems are tailored to the needs of specific communities: the (not publicly accessible) evaluation platform of the 3rd ASP Competition 2011 [4] implements a framework for running an ASP competition. A more general platform is StarExec [13], which aims at providing a generic framework for competition maintainers. The last two systems are similar in spirit, but each has restrictions that limit its general applicability: StarExec does not support generic solver input and offers no scripting support, while the ASP competition evaluation platform does not support fault-tolerant execution of instance runs. Moreover, benchmark statistics and rankings can only be computed after all solver runs on all benchmark instances have been completed.
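To make the resource-limiting step concrete, the following is a minimal sketch, in the spirit of tools such as Runlim and Runsolver but not their actual implementation, of how a harness might cap the CPU time and memory of a single solver run on a POSIX system and record its usage; the limits, solver command, and instance path are illustrative assumptions.

    # Illustrative sketch only; not the implementation of Runlim or Runsolver.
    import resource
    import subprocess

    CPU_LIMIT_S = 600          # assumed per-run CPU-time limit (seconds)
    MEM_LIMIT_BYTES = 4 << 30  # assumed per-run address-space limit (4 GiB)

    def set_limits():
        # Applied in the child process just before the solver is exec'ed.
        resource.setrlimit(resource.RLIMIT_CPU, (CPU_LIMIT_S, CPU_LIMIT_S))
        resource.setrlimit(resource.RLIMIT_AS, (MEM_LIMIT_BYTES, MEM_LIMIT_BYTES))

    def run_solver(cmd, instance_file):
        # Run the solver on one benchmark instance under the limits above.
        with open(instance_file) as inst:
            proc = subprocess.Popen(cmd, stdin=inst,
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE,
                                    preexec_fn=set_limits)
            out, err = proc.communicate()
        # Cumulative resource usage of terminated children (here: this solver run).
        usage = resource.getrusage(resource.RUSAGE_CHILDREN)
        return {
            "exit_code": proc.returncode,
            "cpu_time_s": usage.ru_utime + usage.ru_stime,
            "max_rss_kib": usage.ru_maxrss,   # peak resident set size (KiB on Linux)
            "stdout": out,
        }

    # Hypothetical usage:
    # stats = run_solver(["./solver", "--quiet"], "benchmarks/instance01.lp")

A production harness would additionally enforce wall-clock timeouts and sample memory periodically, which is precisely the functionality that dedicated tools like Runlim and Runsolver provide.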