MuJoCo stands for Multi-Joint dynamics with Contact. It is developed by Emo Todorov for Roboti LLC. Initially it was used at the Movement Control Laboratory, University of Washington, and it has since been adopted by a wide community of researchers and developers. MuJoCo is a physics engine that aims to facilitate research and development in robotics, biomechanics, graphics and animation, and other areas where fast and accurate simulation is needed. It offers a unique combination of speed, accuracy and modeling power, yet it is not merely a better simulator. Rather, it is the first full-featured simulator designed from the ground up for model-based optimization, and in particular optimization through contacts. MuJoCo makes it possible to scale up computationally intensive techniques such as optimal control, physically consistent state estimation, system identification and automated mechanism design, and to apply them to complex dynamical systems exhibiting contact-rich behavior. It also has more traditional applications, such as testing and validation of control schemes before deployment on physical robots, interactive scientific visualization, virtual environments, animation and gaming.
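To give a concrete sense of MuJoCo's modeling format, below is a minimal sketch of an MJCF model: a sphere dropping onto a plane, which exercises the engine's contact dynamics. The element and attribute names follow the MJCF schema; the model, body and geom names are illustrative choices, not part of any shipped example.

```xml
<!-- Minimal MJCF model: a free-falling sphere landing on a ground plane. -->
<mujoco model="falling_sphere">
  <!-- Integration timestep and gravity for the simulation. -->
  <option timestep="0.002" gravity="0 0 -9.81"/>
  <worldbody>
    <!-- Static ground plane that the sphere will contact. -->
    <geom name="floor" type="plane" size="1 1 0.1"/>
    <!-- A body with a free joint: six unconstrained degrees of freedom. -->
    <body name="ball" pos="0 0 0.5">
      <freejoint/>
      <geom name="ball_geom" type="sphere" size="0.05" rgba="0.8 0.2 0.2 1"/>
    </body>
  </worldbody>
</mujoco>
```

Such a model can be loaded and stepped from the official Python bindings (e.g. `mujoco.MjModel.from_xml_string(...)`, `mujoco.MjData(model)`, and `mujoco.mj_step(model, data)` in a loop), after which contact forces between the sphere and the plane appear in the simulation state.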

References in zbMATH (referenced in 19 articles)


  1. Raghunathan, Arvind U.; Jha, Devesh K.; Romeres, Diego: PYROBOCOP: Python-based Robotic Control & Optimization Package for Manipulation and Collision Avoidance (2021) arXiv
  2. Cao, Yongcan; Zhan, Huixin: Efficient multi-objective reinforcement learning via multiple-gradient descent with iteratively discovered weight-vector sets (2021)
  3. Hsu, Shao-Chen; Tadiparthi, Vaishnav; Bhattacharya, Raktim: A Lagrangian method for constrained dynamics in tensegrity systems with compressible bars (2021)
  4. Iwamoto, Masami; Kato, Daichi: Efficient actor-critic reinforcement learning with embodiment of muscle tone for posture stabilization of the human arm (2021)
  5. Ohnishi, Motoya; Notomista, Gennaro; Sugiyama, Masashi; Egerstedt, Magnus: Constraint learning for control tasks with limited duration barrier functions (2021)
  6. Bougie, Nicolas; Ichise, Ryutaro: Skill-based curiosity for intrinsically motivated reinforcement learning (2020)
  7. Ciosek, Kamil; Whiteson, Shimon: Expected policy gradients for reinforcement learning (2020)
  8. Rothfuss, Jonas; Lee, Dennis; Clavera, Ignasi; Asfour, Tamim; Abbeel, Pieter: ProMP: Proximal Meta-Policy Search (2020) arXiv
  9. Lazaridis, Aristotelis; Fachantidis, Anestis; Vlahavas, Ioannis: Deep reinforcement learning: a state-of-the-art walkthrough (2020)
  10. Fei, Fan; Tu, Zhan; Yang, Yilun; Zhang, Jian; Deng, Xinyan: Flappy Hummingbird: An Open Source Dynamic Simulation of Flapping Wing Robots and Animals (2019) arXiv
  11. Parisi, Simone; Tangkaratt, Voot; Peters, Jan; Khan, Mohammad Emtiyaz: TD-regularized actor-critic methods (2019)
  12. Fujita, Yasuhiro; Kataoka, Toshiki; Nagarajan, Prabhat; Ishikawa, Takahiro: ChainerRL: A Deep Reinforcement Learning Library (2019) arXiv
  13. Aggarwal, Charu C.: Neural networks and deep learning. A textbook (2018)
  14. Ueltzhöffer, Kai: Deep active inference (2018)
  15. Moritz, Philipp; Nishihara, Robert; Wang, Stephanie; Tumanov, Alexey; Liaw, Richard; Liang, Eric; Elibol, Melih; Yang, Zongheng; Paul, William; Jordan, Michael I.; Stoica, Ion: Ray: A Distributed Framework for Emerging AI Applications (2017) arXiv
  16. Wirth, Christian; Akrour, Riad; Neumann, Gerhard; Fürnkranz, Johannes: A survey of preference-based reinforcement learning methods (2017)
  17. Zobova, Alexandra A.; Habra, Timothée; Van der Noot, Nicolas; Dallali, Houman; Tsagarakis, Nikolaos G.; Fisette, Paul; Ronsse, Renaud: Multi-physics modelling of a compliant humanoid robot (2017)
  18. Levine, Sergey; Finn, Chelsea; Darrell, Trevor; Abbeel, Pieter: End-to-end training of deep visuomotor policies (2016)
  19. Qiu, Weichao; Yuille, Alan: UnrealCV: Connecting Computer Vision to Unreal Engine (2016) arXiv