Unity Climber Environment
This project introduces an interface for synthesizing climbing movements in Unity. It also provides a framework for experimenting with planning, optimization, supervised learning, and reinforcement learning methods for synthesizing those movements.
Unity ML-Agents (Beta)
Unity Machine Learning Agents (ML-Agents) is an open-source Unity plugin that enables games and simulations to serve as environments for training intelligent agents. Agents can be trained using reinforcement learning, imitation learning, neuroevolution, or other machine learning methods through a simple-to-use Python API. ML-Agents also provides implementations (based on TensorFlow) of state-of-the-art algorithms to enable game developers and hobbyists to easily train intelligent agents for 2D, 3D and VR/AR games.

These trained agents can be used for multiple purposes, including controlling NPC behavior (in a variety of settings such as multi-agent and adversarial), automated testing of game builds, and evaluating different game design decisions pre-release. ML-Agents is mutually beneficial for both game developers and AI researchers, as it provides a central platform where advances in AI can be evaluated on Unity's rich environments and then made accessible to the wider research and game developer communities.
- Unity environment control from Python
- Support for multiple environment configurations and training scenarios
- Train memory-enhanced Agents using deep reinforcement learning
- Easily definable Curriculum Learning scenarios
- Broadcasting of Agent behavior for supervised learning
- Built-in support for Imitation Learning
- Flexible Agent control with On Demand Decision Making
- Visualizing network outputs within the environment
- Simplified set-up with Docker (Experimental)
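To illustrate the "Unity environment control from Python" feature above, here is a minimal sketch of the reset/step control loop a Python script drives. Because a real run requires a compiled Unity build, this sketch substitutes a stub class whose method names and return shape (a dict of per-brain info objects with `vector_observations`, `rewards`, and `local_done`) are modeled on the ML-Agents v0.3 `unityagents.UnityEnvironment` API; the stub itself, the brain name, and the episode behavior are hypothetical stand-ins, not part of this project.

```python
import numpy as np

class StubBrainInfo:
    """Stand-in for the per-brain info object returned by reset()/step()."""
    def __init__(self, n_agents, obs_size):
        self.vector_observations = np.zeros((n_agents, obs_size))
        self.rewards = [0.0] * n_agents
        self.local_done = [False] * n_agents

class StubUnityEnvironment:
    """Hypothetical stub mimicking the shape of unityagents.UnityEnvironment,
    so the control loop below runs without a Unity build."""
    def __init__(self, file_name, n_agents=1, obs_size=4, episode_len=5):
        self.brain_name = "ClimberBrain"  # hypothetical brain name
        self._n, self._obs, self._len = n_agents, obs_size, episode_len
        self._t = 0

    def reset(self, train_mode=True):
        self._t = 0
        return {self.brain_name: StubBrainInfo(self._n, self._obs)}

    def step(self, action):
        self._t += 1
        info = StubBrainInfo(self._n, self._obs)
        info.rewards = [1.0] * self._n                 # fixed stub reward
        info.local_done = [self._t >= self._len] * self._n
        return {self.brain_name: info}

    def close(self):
        pass

# The loop below is the pattern a real script would use with a Unity build
# (there, env = UnityEnvironment(file_name="<build path>") instead).
env = StubUnityEnvironment(file_name="Climber")
brain_name = env.brain_name
info = env.reset(train_mode=True)[brain_name]
total_reward = 0.0
done = False
while not done:
    action = np.zeros(2)                 # placeholder policy: zero action
    info = env.step(action)[brain_name]
    total_reward += info.rewards[0]
    done = info.local_done[0]
env.close()
print(total_reward)  # stub: 5 steps x reward 1.0 -> 5.0
```

The same reset/step loop is where a planner, optimizer, or learning algorithm would plug in: replace the placeholder action with the output of a policy, and feed the returned observations and rewards back into training.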
Documentation and References
For more information, including installation and usage instructions, see the ML-Agents documentation home. If you have used a version of ML-Agents prior to v0.3, we strongly recommend the guide on migrating to v0.3.