• PRISM

  • Referenced in 417 articles [sw01186]
  • probabilistic models: discrete-time Markov chains, Markov decision processes and continuous-time Markov chains. Analysis ... checking engines: one symbolic, using BDDs (binary decision diagrams) and MTBDDs (multi-terminal BDDs...
  • MRMC

  • Referenced in 70 articles [sw04129]
  • bounded reachability analysis for continuous-time Markov decision processes (CTMDPs) and CSL model checking...
  • POMDPs

  • Referenced in 37 articles [sw03055]
  • planning algorithms for POMDPs. Partially Observable Markov Decision Processes (POMDPs) provide a rich framework...
  • POMDP

  • Referenced in 21 articles [sw03204]
  • Partially Observable Markov Decision Process (POMDP). The 'pomdp-solve' program solves problems that are formulated ... partially observable Markov decision processes, a.k.a. POMDPs. It uses the basic dynamic programming approach...
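Dynamic programming for POMDPs operates on belief states rather than on the hidden states directly. The Bayesian belief update at its core can be sketched in Python for a hypothetical two-state, one-action POMDP (toy "tiger-style" numbers chosen for illustration; this is not pomdp-solve's file format or interface):

```python
# Bayesian belief update for a toy two-state POMDP (hypothetical example).
# T[a][s][s2] are transition probabilities, O[a][s2][o] observation probabilities.
T = [[[1.0, 0.0], [0.0, 1.0]]]      # action 0 ("listen"): state is unchanged
O = [[[0.85, 0.15], [0.15, 0.85]]]  # observation matches the true state 85% of the time

def belief_update(b, a, o):
    """Return b' with b'(s2) proportional to O[a][s2][o] * sum_s T[a][s][s2] * b(s)."""
    unnorm = [O[a][s2][o] * sum(T[a][s][s2] * b[s] for s in range(2))
              for s2 in range(2)]
    z = sum(unnorm)  # probability of observing o; normalizing constant
    return [x / z for x in unnorm]

b = [0.5, 0.5]                 # uniform prior over the two states
b = belief_update(b, 0, 0)     # observing o=0 shifts belief toward state 0
```

Starting from the uniform prior, one observation moves the belief to (0.85, 0.15); solvers like pomdp-solve then compute value functions over this continuous belief space.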
  • LiQuor

  • Referenced in 18 articles [sw04136]
  • operational semantics based on (finite) Markov decision processes. LiQuor provides the facility to perform...
  • FAUST2

  • Referenced in 12 articles [sw23682]
  • possibly non-deterministic) discrete-time Markov processes (dtMP) defined over uncountable (continuous) state spaces ... finite-state Markov chain or Markov decision processes. The abstraction procedure runs in MATLAB...
  • Rapture

  • Referenced in 7 articles [sw13409]
  • Rapture: a tool for verifying Markov decision processes. We present a tool that performs verification ... quantified reachability properties over Markov decision processes (or probabilistic transition systems). The originality...
  • FluCaP

  • Referenced in 10 articles [sw07748]
  • search algorithm for solving First-Order Markov Decision Processes (FOMDPs). Our approach combines first-order...
  • IPC-4

  • Referenced in 10 articles [sw03495]
  • approaches. One community consists of Markov decision process (MDP) researchers interested in developing algorithms that...
  • RRE

  • Referenced in 9 articles [sw22305]
  • solving a partially observable competitive Markov decision process that is automatically derived from attack-response...
  • QUASY

  • Referenced in 5 articles [sw09774]
  • instances of 2-player games and Markov Decision Processes (MDPs) with quantitative winning objectives. Quasy...
  • MDPtoolbox

  • Referenced in 2 articles [sw22705]
  • dynamic programming problems. R package MDPtoolbox: Markov Decision Processes Toolbox. The Markov Decision Processes ... resolution of discrete-time Markov Decision Processes: finite horizon, value iteration, policy iteration, linear programming...
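Value iteration, one of the solution methods listed for MDPtoolbox, can be sketched for a hypothetical two-state, two-action MDP (toy numbers invented for illustration; this is not the R package's API):

```python
# Value iteration on a toy two-state, two-action MDP (hypothetical example).
# P[a][s][t] is the probability of moving from s to t under action a;
# R[a][s] is the expected immediate reward for taking a in s.
P = [[[0.8, 0.2], [0.3, 0.7]],   # action 0
     [[0.5, 0.5], [0.1, 0.9]]]   # action 1
R = [[1.0, 0.0],                 # action 0
     [0.5, 2.0]]                 # action 1
gamma, eps = 0.9, 1e-8           # discount factor, convergence threshold

V = [0.0, 0.0]
while True:
    # One Bellman backup: Q[s][a] = R[a][s] + gamma * E[V(next state)]
    Q = [[R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(2))
          for a in range(2)] for s in range(2)]
    V_new = [max(Q[s]) for s in range(2)]
    if max(abs(V_new[s] - V[s]) for s in range(2)) < eps:
        break
    V = V_new

# Greedy policy with respect to the converged value function
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(2)]
```

Here action 1 is optimal in state 1 (it earns the high reward and mostly stays there), and the converged values satisfy the Bellman optimality equation to within `eps`.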
  • DiPro

  • Referenced in 4 articles [sw09744]
  • continuous time Markov chains (CTMCs) and Markov decision processes (MDPs). The computed counterexamples...
  • DESPOT

  • Referenced in 4 articles [sw27455]
  • planning with regularization. The partially observable Markov decision process (POMDP) provides a principled general framework...
  • iscasMc

  • Referenced in 8 articles [sw36939]
  • interface for the evaluation of Markov chains and Markov decision processes against PCTL and PCTL* specifications...
  • PyMDPtoolbox

  • Referenced in 2 articles [sw22706]
  • pymdptoolbox: Markov Decision Process (MDP) Toolbox. The MDP toolbox provides classes and functions ... resolution of discrete-time Markov Decision Processes. The list of algorithms that have been implemented...
  • mdp

  • Referenced in 3 articles [sw21994]
  • Markov Decision Process (MDP) Toolbox for Matlab. This toolbox supports value and policy iteration...
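Policy iteration, the other classic method these toolboxes list, alternates between evaluating a fixed policy and greedily improving it. A minimal Python sketch for a hypothetical two-state, two-action MDP (toy numbers; this is not the Matlab toolbox's interface):

```python
# Policy iteration on a toy two-state, two-action MDP (hypothetical example).
P = [[[0.9, 0.1], [0.1, 0.9]],   # action 0
     [[0.5, 0.5], [0.5, 0.5]]]   # action 1
R = [[1.0, 1.0],                 # action 0 pays 1 in both states
     [0.0, 0.0]]                 # action 1 pays nothing
gamma = 0.95

def evaluate(policy, sweeps=2000):
    # Iterative policy evaluation: repeated Bellman backups under a fixed policy.
    V = [0.0, 0.0]
    for _ in range(sweeps):
        V = [R[policy[s]][s] + gamma * sum(P[policy[s]][s][t] * V[t] for t in range(2))
             for s in range(2)]
    return V

policy = [1, 1]                  # start from a deliberately bad policy
while True:
    V = evaluate(policy)
    # Greedy improvement step with respect to the evaluated values
    improved = [max(range(2),
                    key=lambda a: R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(2)))
                for s in range(2)]
    if improved == policy:       # stable policy: optimal, stop
        break
    policy = improved
```

Since action 0 dominates here, the iteration converges to the policy (0, 0) with value 1/(1 - gamma) = 20 in both states; in general the policy-evaluation step is often solved as a linear system rather than by sweeps.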
  • jMarkov

  • Referenced in 2 articles [sw22688]
  • Integrated Framework for Markov Chain Modeling. Markov chains (MC) are a powerful tool for modeling ... modeling of Quasi-Birth-and-Death processes, a class of MCs with infinite state space ... determine optimal decision rules based on Markov Decision Processes; and (iv) the jPhase module supports...
  • POMDPs.jl

  • Referenced in 2 articles [sw21537]
  • open-source framework for solving Markov decision processes (MDPs) and partially observable MDPs (POMDPs). POMDPs.jl...
  • REBA

  • Referenced in 2 articles [sw29435]
  • used to construct a partially observable Markov decision process (POMDP). The policy obtained by solving...