MRMC

The ins and outs of the probabilistic model checker MRMC.

The Markov Reward Model Checker (MRMC) is a software tool for verifying properties of probabilistic models. It supports PCTL and CSL model checking, as well as their reward extensions. Distinguishing features of MRMC are its support for computing time- and reward-bounded reachability probabilities, (property-driven) bisimulation minimization, and precise on-the-fly steady-state detection. Recent tool features include time-bounded reachability analysis for continuous-time Markov decision processes (CTMDPs) and CSL model checking by discrete-event simulation. This paper presents the tool’s current status and its implementation details.
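To make the notion of time-bounded reachability concrete, the following is a minimal Python sketch of uniformization, the standard numerical technique for computing time-bounded reachability probabilities in a CTMC. It is an illustration only, not MRMC's own code: the 3-state generator matrix, the function name, and the truncation tolerance are assumptions introduced here for the example.

import numpy as np
from scipy.stats import poisson

# Generator matrix Q of a hypothetical 3-state CTMC; state 2 is the goal
# state and has been made absorbing (all outgoing rates set to zero).
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.5, -1.5,  0.0],
              [ 0.0,  0.0,  0.0]])

def time_bounded_reachability(Q, goal, t, eps=1e-10):
    """Probability, per start state, of reaching `goal` within time t."""
    n = Q.shape[0]
    lam = max(-Q[i, i] for i in range(n))   # uniformization rate
    P = np.eye(n) + Q / lam                 # uniformized DTMC
    v = np.zeros(n)
    v[goal] = 1.0                           # indicator vector of the goal state
    # Truncate the Poisson sum once the remaining probability mass drops below eps.
    k_max = int(poisson.ppf(1.0 - eps, lam * t)) + 1
    probs = np.zeros(n)
    for k in range(k_max + 1):
        probs += poisson.pmf(k, lam * t) * v
        v = P @ v                           # after k steps, v holds P^k applied to the indicator
    return probs

# Probability of reaching state 2 within 1 time unit, from each start state.
print(time_bounded_reachability(Q, goal=2, t=1.0))

In the tool itself, this kind of transient analysis is combined with the precise on-the-fly steady-state detection mentioned in the abstract, which allows the summation to stop early once the iteration vector has converged.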


References in zbMATH (referenced in 64 articles; the 20 most recent, sorted by year, are listed below)

  1. Bakir, Mehmet Emin; Gheorghe, Marian; Konur, Savas; Stannett, Mike: Comparative analysis of statistical model checking tools (2017)
  2. Bohy, Aaron; Bruyère, Véronique; Raskin, Jean-François; Bertrand, Nathalie: Symblicit algorithms for mean-payoff and shortest path in monotonic Markov decision processes (2017)
  3. Esmaeil Zadeh Soudjani, Sadegh; Abate, Alessandro; Majumdar, Rupak: Dynamic Bayesian networks for formal verification of structured stochastic processes (2017)
  4. Feng, Yuan; Zhang, Lijun: Precisely deciding CSL formulas through approximate model checking for CTMCs (2017)
  5. Junges, Sebastian; Guck, Dennis; Katoen, Joost-Pieter; Rensink, Arend; Stoelinga, Mariëlle: Fault trees on a diet: automated reduction by graph rewriting (2017)
  6. Ahmed, Waqar; Hasan, Osman; Tahar, Sofiène: Formal dependability modeling and analysis: a survey (2016)
  7. Bortolussi, Luca; Gast, Nicolas: Mean-field limits beyond ordinary differential equations (2016)
  8. Bortolussi, Luca; Milios, Dimitrios; Sanguinetti, Guido: Smoothed model checking for uncertain continuous-time Markov chains (2016)
  9. Fearnley, John; Rabe, Markus N.; Schewe, Sven; Zhang, Lijun: Efficient approximation of optimal control for continuous-time Markov games (2016)
  10. Hermanns, Holger; Krčál, Jan; Vester, Steen: Distributed synthesis in continuous time (2016)
  11. Vandin, Andrea; Tribastone, Mirco: Quantitative abstractions for collective adaptive systems (2016)
  12. Xu, Ming; Zhang, Lijun; Jansen, David N.; Zhu, Huibiao; Yang, Zongyuan: Multiphase until formulas over Markov reward models: an algebraic approach (2016)
  13. Abate, Alessandro; Soudjani, Sadegh Esmaeil Zadeh: Quantitative approximation of the probability distribution of a Markov process by formal abstractions (2015)
  14. Baier, Christel; Daum, Marcus; Engel, Benjamin; Härtig, Hermann; Klein, Joachim; Klüppelholz, Sascha; Märcker, Steffen; Tews, Hendrik; Völp, Marcus: Locks: picking key methods for a scalable quantitative analysis (2015)
  15. Biscaia, M.; Henriques, D.; Mateus, P.: Decidability of approximate Skolem problem and applications to logical verification of dynamical properties of Markov chains (2015)
  16. Dannenberg, Frits; Hahn, Ernst Moritz; Kwiatkowska, Marta: Computing cumulative rewards using fast adaptive uniformization (2015)
  17. Noll, Thomas: Safety, dependability and performance analysis of aerospace systems (2015)
  18. Aarts, Fides; Kuppens, Harco; Tretmans, Jan; Vaandrager, Frits; Verwer, Sicco: Improving active Mealy machine learning for protocol conformance testing (2014)
  19. Ábrahám, Erika; Becker, Bernd; Dehnert, Christian; Jansen, Nils; Katoen, Joost-Pieter; Wimmer, Ralf: Counterexample generation for discrete-time Markov models: an introductory survey (2014)
  20. Amparore, Elvio Gilberto; Beccuti, Marco; Donatelli, Susanna: (Stochastic) model checking in GreatSPN (2014)



Further publications can be found at: http://mrmc-tool.org/trac/wiki/Bibliography