ApproxRL

ApproxRL: A Matlab Toolbox for Approximate RL and DP. This toolbox contains Matlab implementations of a number of approximate reinforcement learning (RL) and dynamic programming (DP) algorithms. Notably, it contains the algorithms used in the numerical examples from the book: L. Busoniu, R. Babuska, B. De Schutter, and D. Ernst, Reinforcement Learning and Dynamic Programming Using Function Approximators, CRC Press, Automation and Control Engineering Series, April 2010, 280 pages, ISBN 978-1439821084.
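As a brief illustration of the class of algorithms involved, the sketch below shows approximate Q-iteration with a linearly parametrized Q-function, refitted by least squares at every iteration. This follows the general value-iteration-with-function-approximation template treated in the book; the toy problem, RBF features, and constants are hypothetical, and the code is not taken from the toolbox itself.

    % Minimal sketch of approximate Q-iteration with a linear
    % parametrization, Q(x,u) ~= phi(x,u)*theta, refitted by least
    % squares at each iteration. Illustrative only: the toy problem,
    % features, and constants are hypothetical, not ApproxRL code.
    gamma = 0.95;                             % discount factor
    K     = 60;                               % number of Q-iterations
    U     = [-1 1];                           % discrete action set
    Xs    = linspace(-2, 2, 21)';             % sample states for the fit

    f = @(x,u) max(min(x + 0.1*u, 2), -2);    % toy dynamics, clipped to [-2,2]
    r = @(x,u) -x.^2;                         % reward: negative quadratic cost

    c    = linspace(-2, 2, 7);                % RBF centers over the state space
    phix = @(x) exp(-((x - c).^2) / 0.5);     % 1x7 state feature vector
    % State-action features: one block of state features per discrete action.
    phi  = @(x,u) [phix(x)*(u==U(1)), phix(x)*(u==U(2))];

    theta = zeros(14, 1);                     % initial parameter vector
    for k = 1:K
        n = 2*numel(Xs);
        A = zeros(n, 14); b = zeros(n, 1); row = 0;
        for i = 1:numel(Xs)
            for j = 1:2
                x = Xs(i); u = U(j); xp = f(x, u);
                % Bellman target: reward plus discounted max over next actions.
                qp  = max([phi(xp,U(1)); phi(xp,U(2))] * theta);
                row = row + 1;
                A(row,:) = phi(x, u);
                b(row)   = r(x, u) + gamma*qp;
            end
        end
        theta = A \ b;                        % least-squares projection step
    end
    % Greedy policy at any state x: the action maximizing phi(x,u)*theta.

The book and toolbox also treat policy-iteration counterparts (e.g. least-squares policy iteration) and policy-search methods alongside this value-iteration scheme.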


References in zbMATH (referenced in 20 articles)


  1. Vamvoudakis, Kyriakos G.; Ferraz, Henrique: Model-free event-triggered control algorithm for continuous-time linear systems with optimal performance (2018)
  2. Vamvoudakis, Kyriakos G.: Q-learning for continuous-time linear systems: A model-free infinite horizon optimal control approach (2017)
  3. Vamvoudakis, Kyriakos G.; Mojoodi, Arman; Ferraz, Henrique: Event-triggered optimal tracking control of nonlinear systems (2017)
  4. Panfili, Martina; Pietrabissa, Antonio; Oddi, Guido; Suraci, Vincenzo: A lexicographic approach to constrained MDP admission control (2016)
  5. Tutsoy, Onder: Design and comparison base analysis of adaptive estimator for completely unknown linear systems in the presence of OE noise and constant input time delay (2016)
  6. Tutsoy, Onder; Brown, Martin: Chaotic dynamics and convergence analysis of temporal difference algorithms with bang-bang control (2016)
  7. Geramifard, Alborz; Dann, Christoph; Klein, Robert H.; Dabney, William; How, Jonathan P.: RLPy: a value-function-based reinforcement learning framework for education and research (2015)
  8. Vamvoudakis, Kyriakos G.: Non-zero sum Nash Q-learning for unknown deterministic continuous-time linear systems (2015)
  9. Gaggero, Mauro; Gnecco, Giorgio; Sanguineti, Marcello: Approximate dynamic programming for stochastic $N$-stage optimization with application to optimal consumption under uncertainty (2014)
  10. Jung, Tobias; Wehenkel, Louis; Ernst, Damien; Maes, Francis: Optimized look-ahead tree policies: a bridge between look-ahead tree policies and direct policy search (2014)
  11. Laber, Eric B.; Lizotte, Daniel J.; Qian, Min; Pelham, William E.; Murphy, Susan A.: Dynamic treatment regimes: technical challenges and applications (2014)
  12. Lian, Chuanqiang; Xu, Xin; Zuo, Lei; Huang, Zhenhua: Adaptive critic design with graph Laplacian for online learning control of nonlinear systems (2014)
  13. Xu, Xin; Zuo, Lei; Huang, Zhenhua: Reinforcement learning algorithms with function approximation: recent advances and applications (2014)
  14. Fonteneau, Raphael; Murphy, Susan A.; Wehenkel, Louis; Ernst, Damien: Batch mode reinforcement learning based on the synthesis of artificial trajectories (2013)
  15. Jiang, Zhong-Ping; Jiang, Yu: Robust adaptive dynamic programming for linear and nonlinear systems: an overview (2013)
  16. Peters, Markus; Ketter, Wolfgang; Saar-Tsechansky, Maytal; Collins, John: A reinforcement learning approach to autonomous decision-making in smart electricity markets (2013)
  17. Beck, C. L.; Srikant, R.: Error bounds for constant step-size $Q$-learning (2012)
  18. Xu, Hao; Jagannathan, S.; Lewis, F. L.: Stochastic optimal control of unknown linear networked control system in the presence of random delays and packet losses (2012)
  19. Bertsekas, Dimitri P.: Approximate policy iteration: a survey and some new methods (2011)
  20. Powell, Warren B.; Ma, Jun: A review of stochastic algorithms with continuous value function approximation and some new approximate policy iteration algorithms for multidimensional continuous applications (2011)