MRMC
The ins and outs of the probabilistic model checker MRMC. The Markov Reward Model Checker (MRMC) is a software tool for verifying properties over probabilistic models. It supports PCTL and CSL model checking, and their reward extensions. Distinguishing features of MRMC are its support for computing time- and reward-bounded reachability probabilities, (property-driven) bisimulation minimization, and precise on-the-fly steady-state detection. Recent tool features include time-bounded reachability analysis for continuous-time Markov decision processes (CTMDPs) and CSL model checking by discrete-event simulation. This paper presents the tool’s current status and its implementation details.
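To make the tool description concrete, here is an illustrative sketch of the core computation behind CSL time-bounded reachability on a CTMC: transient analysis by uniformization, with goal states made absorbing. This is not MRMC's code; the toy 3-state chain, the function name, and the truncation scheme are invented for illustration.

```python
import math

# Toy 3-state CTMC: 0 -> 1 (rate 2.0), 1 -> 2 (rate 3.0).
# State 2 is the goal and is absorbing, so the transient probability
# of being in state 2 at time t equals the probability of reaching
# it within time t.
Q = [[-2.0, 2.0, 0.0],
     [0.0, -3.0, 3.0],
     [0.0, 0.0, 0.0]]

def bounded_reachability(Q, init, goal, t, eps=1e-9):
    """P(reach `goal` from `init` within time t) via uniformization."""
    n = len(Q)
    q = max(-Q[i][i] for i in range(n)) or 1.0  # uniformization rate
    # Uniformized DTMC: P = I + Q / q
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / q
          for j in range(n)] for i in range(n)]
    pi = [1.0 if i == init else 0.0 for i in range(n)]
    result = 0.0
    poisson = math.exp(-q * t)  # Poisson weight for k = 0 jumps
    total = 0.0
    k = 0
    # Sum Poisson-weighted DTMC distributions until the weights
    # account for (1 - eps) of the probability mass.
    while total < 1.0 - eps:
        result += poisson * pi[goal]
        total += poisson
        # One uniformized step: pi = pi * P
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        k += 1
        poisson *= q * t / k
    return result
```

For this chain the reachability time is the sum of two exponentials (rates 2 and 3), so `bounded_reachability(Q, 0, 2, 1.0)` agrees with the closed form 1 - 3e^{-2} + 2e^{-3} ≈ 0.6936.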
References in zbMATH (referenced in 66 articles)
Showing results 1 to 20 of 66.
Sorted by year (descending)
- Bozzano, Marco; Cimatti, Alessandro; Mattarei, Cristian: Formal reliability analysis of redundancy architectures (2019)
- Bakir, Mehmet Emin; Gheorghe, Marian; Konur, Savas; Stannett, Mike: Comparative analysis of statistical model checking tools (2017)
- Bohy, Aaron; Bruyère, Véronique; Raskin, Jean-François; Bertrand, Nathalie: Symblicit algorithms for mean-payoff and shortest path in monotonic Markov decision processes (2017)
- Esmaeil Zadeh Soudjani, Sadegh; Abate, Alessandro; Majumdar, Rupak: Dynamic Bayesian networks for formal verification of structured stochastic processes (2017)
- Feng, Yuan; Zhang, Lijun: Precisely deciding CSL formulas through approximate model checking for CTMCs (2017)
- Hölzl, Johannes: Markov chains and Markov decision processes in Isabelle/HOL (2017)
- Junges, Sebastian; Guck, Dennis; Katoen, Joost-Pieter; Rensink, Arend; Stoelinga, Mariëlle: Fault trees on a diet: automated reduction by graph rewriting (2017)
- Ahmed, Waqar; Hasan, Osman; Tahar, Sofiène: Formal dependability modeling and analysis: a survey (2016)
- Bortolussi, Luca; Gast, Nicolas: Mean-field limits beyond ordinary differential equations (2016)
- Bortolussi, Luca; Milios, Dimitrios; Sanguinetti, Guido: Smoothed model checking for uncertain continuous-time Markov chains (2016)
- Fearnley, John; Rabe, Markus N.; Schewe, Sven; Zhang, Lijun: Efficient approximation of optimal control for continuous-time Markov games (2016)
- Hermanns, Holger; Krčál, Jan; Vester, Steen: Distributed synthesis in continuous time (2016)
- Vandin, Andrea; Tribastone, Mirco: Quantitative abstractions for collective adaptive systems (2016)
- Xu, Ming; Zhang, Lijun; Jansen, David N.; Zhu, Huibiao; Yang, Zongyuan: Multiphase until formulas over Markov reward models: an algebraic approach (2016)
- Abate, Alessandro; Soudjani, Sadegh Esmaeil Zadeh: Quantitative approximation of the probability distribution of a Markov process by formal abstractions (2015)
- Baier, Christel; Daum, Marcus; Engel, Benjamin; Härtig, Hermann; Klein, Joachim; Klüppelholz, Sascha; Märcker, Steffen; Tews, Hendrik; Völp, Marcus: Locks: picking key methods for a scalable quantitative analysis (2015)
- Biscaia, M.; Henriques, D.; Mateus, P.: Decidability of approximate Skolem problem and applications to logical verification of dynamical properties of Markov chains (2015)
- Dannenberg, Frits; Hahn, Ernst Moritz; Kwiatkowska, Marta: Computing cumulative rewards using fast adaptive uniformization (2015)
- Noll, Thomas: Safety, dependability and performance analysis of aerospace systems (2015)
- Aarts, Fides; Kuppens, Harco; Tretmans, Jan; Vaandrager, Frits; Verwer, Sicco: Improving active Mealy machine learning for protocol conformance testing (2014)
Further publications can be found at: http://mrmc-tool.org/trac/wiki/Bibliography