Getting Started with Unity ML-Agents 

This guide will help you set up the Unity ML-Agents Toolkit, run real examples of intelligent agents in Unity, and then train your own using machine learning.

We’ll walk through everything: downloading the project, setting up the Python environment, testing prebuilt agent behaviors, and finally training your own agent.

Step 1: Download the ML-Agents Toolkit

First, you'll get the Unity ML-Agents project files onto your computer.

Option A – Using Git (Recommended)

If you have Git installed, open your terminal or command prompt and run:


git clone --branch release_20 https://github.com/Unity-Technologies/ml-agents.git
cd ml-agents

 Important Note: release_20 is a stable version compatible with Unity 2022 LTS.

Option B – Manual Download

If you don't have Git:

  1. Go to github.com/Unity-Technologies/ml-agents

  2. Click the Branch dropdown and select release_20 (Here is a direct link to Release 20: https://github.com/Unity-Technologies/ml-agents/tree/release_20_branch)

  3. Click Code → Download ZIP

  4. Extract the ZIP archive

  5. Open a terminal or file explorer inside the extracted ml-agents folder

Step 2: Set Up the Python Environment (Using Setup Scripts)

Unity ML-Agents uses Python to train game agents. To avoid version issues, we’ll use a virtual environment.
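Before running either setup script, it is worth confirming which Python is on your PATH, since ML-Agents only supports specific Python versions (see the documentation in the docs/ folder of the release you cloned for the exact range):


python3 --version
pip3 --version

On Windows, the equivalent commands are python --version and pip --version.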

For Windows Users

  1. Create a new file called setup_mlagents.bat inside the ml-agents folder

  2. Paste in this content:


@echo off
echo Creating virtual environment...
python -m venv venv
echo Activating virtual environment...
call venv\Scripts\activate
echo Installing ML-Agents and PyTorch...
pip install mlagents==1.1.0 torch --extra-index-url https://download.pytorch.org/whl/cu117
echo Optional: Install analysis tools (Jupyter, pandas, matplotlib)...
pip install jupyter pandas matplotlib
echo Environment setup complete.
pause
  3. Right-click the .bat file and choose Run as administrator

  4. When it finishes, you’ll see confirmation that ML-Agents is installed




For macOS/Linux Users

  1. Create a file called setup_mlagents.sh in the ml-agents folder

  2. Paste in this content:


#!/bin/bash
echo "Creating virtual environment..."
python3 -m venv venv
echo "Activating virtual environment..."
source venv/bin/activate
echo "Installing ML-Agents and PyTorch..."
pip install mlagents==1.1.0 torch --extra-index-url https://download.pytorch.org/whl/cu117
echo "Optional: Installing analysis tools (Jupyter, pandas, matplotlib)..."
pip install jupyter pandas matplotlib
echo "Environment setup complete."
  3. Make it executable:


chmod +x setup_mlagents.sh
  4. Then run:


./setup_mlagents.sh
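
Once the script finishes, you can run a quick sanity check from the same folder (shown for macOS/Linux, assuming the script created the venv/ folder as above; on Windows, activate with call venv\Scripts\activate instead):


# Re-activate the environment in the current shell
source venv/bin/activate

# Should print the mlagents-learn usage text without errors
mlagents-learn --help

# Should print the installed PyTorch version
python -c "import torch; print(torch.__version__)"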



Step 3: Open the Unity Project

After cloning or extracting the project, the top-level ml-agents folder will look something like this (other files and folders omitted):


ml-agents/       <-- This is the top-level project folder (WORK HERE)
├── Project/     <-- Unity project
├── config/      <-- Training configs (.yaml files)
├── docs/        <-- Documentation
├── ml-agents/   <-- Python package source code (NOT where you run setup)
├── results/     <-- Will be created after training
├── setup.py
└── README.md

Once setup is complete, we’ll use Unity Hub to explore and run examples.

To Open the Project:

  1. Open Unity Hub

  2. Select Add Project from Disk 


  3. Navigate to the ml-agents/Project folder and select Add Project

  4. Make sure a Unity 2022.x LTS Editor version is selected to open the project

  5. Choose Continue even if Unity warns that the project was last saved with a different Editor version

  6. Wait for Unity to load the project (may take 1–2 minutes)

  7. Ignore any package errors unless Unity asks for upgrades

Step 4: Explore and Understand the 3D Ball Environment

ML-Agents comes with several fully working examples.

Each scene features a different task, and the agent has already been trained using machine learning.

To Try an Example:

  1. In Unity, go to: Assets/ML-Agents/Examples/

  2. Choose one of these folders:

    • 3DBall

    • PushBlock

    • Crawler

    • Walker

    • FoodCollector

    • Soccer 

  3. Inside the folder, open the Scenes/ subfolder and double-click the .unity scene file.

  4. Once loaded, click Play in Unity to watch the agent in action.

Before you train your own agent, let’s explore a simple but powerful example: 3DBall.

This environment is designed to teach a basic agent how to balance a ball on a moving platform using machine learning.

What is the 3D Ball Scenario?

In the 3D Ball example:

  • Each Agent is a flat platform with a ball on top

  • The goal is to keep the ball balanced on the platform for as long as possible

  • The platform can tilt left/right and forward/backward using physics-based motion

  • The agent learns how to keep the ball centered—just like balancing something in your hand

This environment is simple enough to train quickly but illustrates how reinforcement learning works: the agent receives a small positive reward each step the ball stays balanced and a negative reward if it falls off.
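As a rough illustration of how this adds up (the exact reward values are defined in the example’s Agent script; the numbers here are only for the arithmetic), if the agent earned +0.1 for every step the ball stays up and -1 when it drops, an episode that lasts 50 steps before a drop would score 0.1 × 50 - 1 = 4, while one lasting 200 steps would score 19, so training steadily favors behaviors that keep the ball balanced longer.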

To try out the 3D Ball:

  1. In Unity, go to:

    Assets/ML-Agents/Examples/3DBall/Scenes/
  2. Double-click to open: 3DBall.unity 

  3. In the Hierarchy, select the GameObject: Ball3DAgent

  4. In the Inspector, locate the Behavior Parameters component

    • Under Model, make sure it has a .onnx file assigned (e.g., 3DBall.onnx)

    • Under Behavior Type, set it to Inference Only (it is also fine to leave it as Default)




  5. Press Play in Unity

You should now see platforms actively tilting to keep the balls balanced. This is the output of a previously trained ML model. 

What to Observe

  • Agents should balance the ball for a long time by continuously adjusting the platform angle

  • This behavior is not scripted manually—it was learned through trial-and-error training

  • If the ball falls quickly, it likely means a model isn’t loaded → check the .onnx file in the Behavior Parameters



Step 5: Train Your Own Agent

Now that you’ve seen what trained agents can do, you’re ready to run your own training session!

Before training your own agents, it’s important to see what success looks like.

What to Look For:

  • Does the agent balance, walk, or collect items intelligently?

  • Try modifying the environment (e.g. change gravity) and see how the agent reacts.

  • Open the Agent GameObject in the Hierarchy and look at its Behavior Parameters.

If no model is assigned, you can drag one in from TFModels/ or assign it in the Inspector.

Let’s start with the 3DBall environment, where the agent learns to balance a ball.


 Recommended Project Structure

your-project/
├── venv/            # virtual environment
├── config/          # training configs (YAML files)
├── results/         # training logs & models
└── unity-project/   # ML-Agents Unity project

If you followed the steps above, the cloned ml-agents/ folder already matches this layout: the virtual environment is in venv/, the training configs in config/, the Unity project in Project/, and results/ is created when training starts.

To Train:

  1. Open a terminal and activate your environment:

Windows

call venv\Scripts\activate


macOS/Linux
source venv/bin/activate
  2. In the same terminal, start the training run:

mlagents-learn config/ppo/3DBall.yaml --run-id=My3DBallRun
  3. Wait until you see:


Start training by pressing the Play button in the Unity Editor.
  4. Go back to Unity and press Play in the Editor

The training will now begin. You’ll see mean reward values printed in the terminal.
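
Training can take a while. Beyond the terminal output, ML-Agents writes TensorBoard summaries into the results/ folder, so you can watch the reward curves in a browser. In a second terminal (with the same virtual environment activated), run:


tensorboard --logdir results

Then open http://localhost:6006. You can stop training at any time with Ctrl+C in the training terminal; the model trained so far is saved automatically.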

Step 6: Use Your Trained Model in Unity

After training (you can stop it after a few minutes for testing), your trained model is saved inside the results folder for your run ID, as a .onnx file named after the agent’s behavior. For the 3DBall example that is:

results/My3DBallRun/3DBall.onnx
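
You can drag this file into Unity in the next step, or copy it into the example’s model folder from the command line first (paths assume you are in the top-level ml-agents folder and used the run ID above):


cp results/My3DBallRun/3DBall.onnx Project/Assets/ML-Agents/Examples/3DBall/TFModels/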

To Use the Trained Model:

  1. Drag the .onnx file into Unity’s folder:


    Assets/ML-Agents/Examples/3DBall/TFModels/
  2. In the Unity Hierarchy, select the Agent GameObject

  3. In the Behavior Parameters:

    • Model: Assign your new .onnx file

    • Behavior Type: Set to Inference Only

  4. Press Play in Unity and watch your trained agent balance the ball 

Step 7: Explore and Experiment

Try training agents in different scenes now that your setup is working:

  • PushBlock: Push a cube to a goal

  • Crawler: Learn to walk on many legs

  • Walker: Learn to walk upright (bipedal)

  • FoodCollector: Collect food while avoiding enemies

  • SoccerTwos: 2v2 soccer agents in teams

Each scene has a matching config file in config/ppo/. Swap that YAML file into the mlagents-learn command to train it.
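
For example, to train the PushBlock agents (the run ID is just a label you choose; PushBlock.yaml ships with the repository in config/ppo/):


mlagents-learn config/ppo/PushBlock.yaml --run-id=MyPushBlockRun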

Deactivating the Environment

When you’re done for the day, deactivate the virtual environment. The command is the same on Windows and macOS/Linux:


deactivate


Troubleshooting

  • mlagents-learn not found: Did you activate the virtual environment first?

  • Agent won’t move in Unity: Add a Decision Requester component to the agent

  • Crash or "Communicator exited": Make sure the agent moves via physics (e.g., Rigidbody.AddForce()) rather than by setting positions directly

  • Index errors in a C# script: Make sure the action size in the script matches the settings in Behavior Parameters
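
One more common issue: if you reuse a run ID, mlagents-learn will refuse to start because results from the previous run already exist. You can either resume that run or overwrite it (both flags are part of the mlagents-learn CLI):


# Continue training from where the previous run stopped
mlagents-learn config/ppo/3DBall.yaml --run-id=My3DBallRun --resume

# Or start over and overwrite the previous results
mlagents-learn config/ppo/3DBall.yaml --run-id=My3DBallRun --force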

All set!

You now have a full Unity ML + Python toolchain working:

  • You’ve tested working examples

  • You’ve trained an agent from scratch

  • You’ve deployed your own .onnx model into Unity

Continue exploring, experiment with different setups, and start building your own intelligent games!