Catalyst.RL: A Distributed Framework for Reproducible RL Research. Despite the recent progress in the field of deep reinforcement learning (RL), and arguably because of it, a large body of work remains to be done in reproducing and carefully comparing RL algorithms. We present catalyst.RL, an open-source framework for RL research with a focus on reproducibility and flexibility. The main features of our library include large-scale asynchronous distributed training, easy-to-use configuration files listing the complete set of hyperparameters for each experiment, and efficient implementations of various RL algorithms and auxiliary tricks, such as frame stacking, n-step returns, and value distributions. To demonstrate the usefulness of our framework, we evaluate it on a range of continuous-control benchmarks, as well as on the task of developing a controller that enables a physiologically based human model with a prosthetic leg to walk and run. The latter task was introduced at the NeurIPS 2018 AI for Prosthetics Challenge, where our team took 3rd place, capitalizing on the ability of catalyst.RL to train high-quality, sample-efficient RL agents.
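One of the auxiliary tricks mentioned above, n-step returns, can be sketched as follows. This is an illustrative NumPy implementation under our own assumptions (function name, array layout, and truncation behavior are ours), not the catalyst.RL API:

```python
import numpy as np

def n_step_returns(rewards, values, gamma=0.99, n=3):
    """Compute n-step return targets for a finished rollout.

    G_t = sum_{k=0}^{n-1} gamma^k * r_{t+k} + gamma^n * V(s_{t+n}),
    truncated at the end of the episode.

    rewards: list of T rewards r_0..r_{T-1}
    values:  list of T+1 value estimates V(s_0)..V(s_T)
    """
    T = len(rewards)
    returns = np.zeros(T)
    for t in range(T):
        G, discount = 0.0, 1.0
        for k in range(n):
            if t + k >= T:  # episode ended before n steps elapsed
                break
            G += discount * rewards[t + k]
            discount *= gamma
        if t + n < len(values):  # bootstrap from the value estimate, if reachable
            G += discount * values[t + n]
        returns[t] = G
    return returns
```

Larger n propagates reward information faster at the cost of higher variance; n = 1 recovers the standard one-step temporal-difference target.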
References in zbMATH (referenced in 3 articles, 1 standard article)
- Fujita, Yasuhiro; Nagarajan, Prabhat; Kataoka, Toshiki; Ishikawa, Takahiro: ChainerRL: a deep reinforcement learning library (2021)
- Kolesnikov, Sergey; Hrinchuk, Oleksii: Catalyst.RL: a distributed framework for reproducible RL research (2019) arXiv
- Fujita, Yasuhiro; Kataoka, Toshiki; Nagarajan, Prabhat; Ishikawa, Takahiro: ChainerRL: a deep reinforcement learning library (2019) arXiv