LibPGRL: a high-performance reinforcement learning library in C++.

The PG library was originally intended to be a high-performance policy-gradient reinforcement learning library. Since the first version it has been extended with a number of value-based RL algorithms, so the name is only historical. It is now a general RL library that implements, for example, natural actor-critic and least-squares policy iteration. It has been designed with large distributed RL systems in mind. It's not perfect, but it is pretty fast. API documentation and examples are provided. What libpg does NOT provide is model-based planning algorithms such as value iteration, real-time dynamic programming, or exact policy gradient. There is limited support for belief-state tracking in the simulators/Cassandra/ directory (named because we use the POMDP file format created by Anthony Cassandra). One day I'd like to extend the library in these directions, but that will require some uptake of the library.
References in zbMATH (referenced in 1 article)
- Geramifard, Alborz; Dann, Christoph; Klein, Robert H.; Dabney, William; How, Jonathan P.: RLPy: a value-function-based reinforcement learning framework for education and research (2015)