Unity ML-Agents — Part III: Training

This blog post is the third in a series covering getting started with the ML-Agents Unity toolkit. I am following Code Monkey’s YouTube tutorial, and these posts roughly follow that video. The ML-Agents GitHub repository also includes example projects and code to help you get started: https://github.com/Unity-Technologies/ml-agents/

For steps on how to set up ML-Agents and resource links, check out my Part I post. For steps on getting started with ML-Agents and setting up your project, check out my Part II post.

1. Simple test

Starting where we left off last time, we can now start to train our AI Agent. Let’s start by adding a Debug line in our Agent script to print out actions as they occur:

    Debug.Log(action.DiscreteActions[0]);       // for discrete (int) or
    Debug.Log(action.ContinuousActions[0]);     // for continuous (float)

In the Unity editor, make sure your Agent’s Behavior Type is set to “Default”.

To start training, go to your terminal (from inside your virtual environment), and use the command mlagents-learn. You can specify a new run ID, or you can use the --resume flag to resume an old ID. Or, you can use the --force flag to overwrite previous data:

    $ mlagents-learn --run-id=Test1 --force

The terminal should now direct you to go back to the Unity Editor and hit “Play” to run your game. Once you do, you should see your Agent start jiggling around the screen. Learning!

2. Contain your bot

At this point, I realized why the examples include Wall objects (instead of just an Enemy object like I had in my Part II post). If you don’t contain your Agent within a specified area, it can aimlessly jiggle around forever. I made sure to add some wall elements to my scene before I continued training. 🙂
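
Since we’ll also want the Agent to learn to avoid these walls (see section 3 below), one option is to penalize wall hits the same way we penalize hitting the Enemy in Part II’s trigger handler. A minimal sketch, assuming a hypothetical empty Wall marker component attached to each wall object:

    // Inside OnTriggerEnter, alongside the Microfilm and Shark checks
    // (Wall is a hypothetical empty marker component on each wall object)
    if (other.TryGetComponent(out Wall wall))
    {
        SetReward(-1f);  // penalty for bumping a wall
        EndEpisode();
    }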

3. Prevent possum bot

What if our Agent learns to avoid Walls and Enemies, but becomes overly risk-averse in the process and decides to never move anywhere at all?

To force our Agent into some type of action, and to ensure each episode ends and doesn’t run forever, we can set the “Max Step” in our Agent’s properties: under “Script”, set “Max Step” to 1000.
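
If you prefer setting this from code, “Max Step” corresponds to the MaxStep field on the Agent base class. A minimal sketch, inside your Agent subclass:

    public override void Initialize()
    {
        MaxStep = 1000;  // episode automatically ends after 1000 steps
    }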

4. Better smarter faster (Kage Bunshin no Jutsu)

How can we make our Agent learn more, faster?

A simple way to speed up our Agent’s learning is to create more Agents. It’ll be just like in Naruto, when Naruto is able to quickly level up and gain new skills by creating a ton of shadow clones of himself that can all train simultaneously.

Make clones

First, make your entire environment into a prefab. Create a new empty object and name it something like “Environment”. Select all the objects involved in your scene (I have my Agent JimBand, my Target Microfilm, my Enemy Shark, 4 Walls, and a Ground) and move them into your Environment. Now create a prefab from this Environment object. Now that you have an Environment prefab, plop a bunch of them into your scene, so that they can all train simultaneously.

Update your script!

Important: Now that you have a bunch of duplicate environments, you will need to update your script to use local positions instead of global positions. Replace calls to transform.position with transform.localPosition, for example:

    public override void CollectObservations(VectorSensor sensor)
    {
        sensor.AddObservation(transform.localPosition);
        sensor.AddObservation(microfilmTransform.localPosition);
    }
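
The same change applies anywhere else your script touches positions. For example, the episode-reset logic from Part II should now reset the Agent relative to its own Environment parent rather than the world origin. A minimal sketch, assuming your Agent sits directly under its Environment object:

    public override void OnEpisodeBegin()
    {
        // localPosition is relative to this clone's Environment parent
        transform.localPosition = Vector3.zero;
    }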

You’ll probably also need to adjust your camera position if you want to see all your Agents training at once.

Get smart

Run the training for a while until your Agents get smart. It’s helpful and fun to include a visual for when each Agent “wins” or “loses”. I changed the background color for each.
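
A minimal sketch of that approach (the field names are hypothetical; assign the background’s MeshRenderer and two Materials in the Inspector, and call ShowResult just before EndEpisode in your trigger handler):

    [SerializeField] private MeshRenderer backgroundRenderer;  // the environment's background
    [SerializeField] private Material winMaterial;
    [SerializeField] private Material loseMaterial;

    // Call just before EndEpisode() in OnTriggerEnter
    private void ShowResult(bool won)
    {
        backgroundRenderer.material = won ? winMaterial : loseMaterial;
    }

Here’s what that looked like in training (Agent is purple, Target is pink, Enemy is blue; note that the background color represents the result of the previous attempt):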

5. How to use your brain

Once your Agents are good and smart, go ahead and stop the game, and your neural network model (the “brain”) will be saved to a .onnx file. The terminal output will let you know where to find this file and what its name is:

    [INFO] Exported results/TestParameters/GetMicrofilm/GetMicrofilm-240818.onnx

To be able to use this neural network model, copy/move the .onnx file into your project’s Assets directory. I created a new folder called “NNModels”.

In the Unity Editor, temporarily disable all your copy Environments. You can disable an object by un-checking the box next to its name in its properties, or you can use the keyboard shortcut Alt-Shift-A, which will allow you to easily disable (or activate) multiple items at once.

In your original Environment, select your original Agent, and in its properties, assign your new neural network brain to it as its “Model”.

You can leave “Behavior Type” as “Default”, or you can explicitly set it to “Inference only”.

From here, you can simply hit Play/Run and watch your Agent use its new brain to solve its mission.

6. Environment Parameters

The hyperparameters for training your Agent are specified in a configuration file that you can pass to the mlagents-learn program. To have finer control over your training, create a folder in your Assets folder named “config”, and create a new .yaml file inside it. I named mine after my Agent’s “Behavior Name” (GetMicrofilm.yaml); note that it’s the behavior key inside the file (under “behaviors:”) that needs to match your Agent’s “Behavior Name”.

The ML-Agents GitHub repository includes an example config .yaml you can use: https://github.com/Unity-Technologies/ml-agents/blob/main/docs/Learning-Environment-Create-New.md#training-the-environment. Here’s what that example .yaml config looks like:

    behaviors:
      RollerBall:
        trainer_type: ppo
        hyperparameters:
          batch_size: 10
          buffer_size: 100
          learning_rate: 3.0e-4
          beta: 5.0e-4
          epsilon: 0.2
          lambd: 0.99
          num_epoch: 3
          learning_rate_schedule: linear
          beta_schedule: constant
          epsilon_schedule: linear
        network_settings:
          normalize: false
          hidden_units: 128
          num_layers: 2
        reward_signals:
          extrinsic:
            gamma: 0.99
            strength: 1.0
        max_steps: 500000
        time_horizon: 64
        summary_freq: 10000

The “beta_schedule” and “epsilon_schedule” parameters in this example gave me errors, so I removed those two lines from my config file for now.

Now that you have this config file set up, you can call mlagents-learn like this:

    $ mlagents-learn ./Assets/config/GetMicrofilm.yaml --run-id=TestParameters

One of my current to-do items is to get familiar with all of these environment parameters, what they do and how to smartly twiddle them.

7. Get more smart with more random

Our AI Agent JimBand is now trained and can complete his given mission. But, he’s not very smart. If we move the Microfilm or his enemy the Shark and send him off with his current NN brain, poor JimBand will most likely do very poorly. This is because he has only learned to find the target and avoid obstacles as they are in their current, static positions. 🙁

To help our Agent out, we can introduce randomness into our training. Let’s start the Agent and his Target at a new random position each time a new episode begins.
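
Here’s a minimal sketch of that, overriding OnEpisodeBegin (the spawn ranges are made-up values; use whatever fits inside your walls):

    public override void OnEpisodeBegin()
    {
        // Spawn the Agent and the Target at random local positions each episode
        transform.localPosition = new Vector3(Random.Range(-4f, 4f), 0, Random.Range(-4f, 4f));
        microfilmTransform.localPosition = new Vector3(Random.Range(-4f, 4f), 0, Random.Range(-4f, 4f));
    }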

I also had fun experimenting with moving my Shark object around randomly, which added an extra level of challenge for JimBand. I ended up having to set a rule to keep the Shark and the Microfilm a sufficient distance apart, to avoid creating a scenario that was unsolvable. Even with all three characters’ positions randomized, the Agent was able to learn to find the Target while avoiding the Enemy, which I found pretty impressive.
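
The distance rule can be as simple as re-rolling the Shark’s spawn until it lands far enough away from the Microfilm. A sketch, reusing the sharkTransform reference from Part II with a made-up minimum distance:

    // Keep re-rolling the Shark's spawn until it's far enough from the Microfilm
    Vector3 sharkSpawn;
    do
    {
        sharkSpawn = new Vector3(Random.Range(-4f, 4f), 0, Random.Range(-4f, 4f));
    } while (Vector3.Distance(sharkSpawn, microfilmTransform.localPosition) < 2f);
    sharkTransform.localPosition = sharkSpawn;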

Here’s my training in action, after adding randomness (Agent is purple, Target is pink, Enemy is blue, background color represents result of previous episode):

8. Observe your training progress

In order to observe the training process in detail, you can use TensorBoard, which will graph progress as you train your AI. From within your virtual environment, while your training is running, run this command from a new terminal to view TensorBoard:

    $ tensorboard --logdir results

Then navigate to localhost:6006 in your browser to view your training stats. In the TensorBoard graphs, you should see Reward increasing (as your AI gets the goal) and Episode length decreasing (as your AI gets the goal faster).

Have fun making some smart bots!

       __==`==__
     {|  o L o  |}
     ,|  '''''  |,
   /'.|=========|.'\
  / / |.. ___ ..| \ \
 (/\) |  |   |  | (/\)
      |___\  |___\

10-21-21

Modulo Magica

In my spare time I’ve been playing around with MagicaVoxel. Today I was experimenting with using modulo operations to stagger the cells of a model over multiple layers. I originally thought this would be useful for exporting .obj models that could then be imported elsewhere as independent cells. It wasn’t! But it was still fun playing with numbers and colorful cubes.

Here are the original layers I was working with:

Screengrab of MagicaVoxel editor showing several solid voxel layers across the y-axis, displayed in a colorful gradient of pinks and yellows.

The colors are from a map I was using to generate the original model. I output these values as colors to help me debug my code. Also it’s pretty.

Here’s a screengrab of what it looked like midway:

Screengrab of MagicaVoxel editor showing voxel layers spread across the y-axis, where each layer contains cells with the same x and z coordinates as its neighbors, displayed in a colorful gradient of pinks and yellows.

My math was off at this point, so the cells aren’t staggered correctly.

And here’s another screengrab of what it looked like once I got the cell-staggering to work:

Screengrab of MagicaVoxel editor showing voxel layers spread across the y-axis, where each layer contains cells with x and z coordinates distinct from its neighbors, displayed in a colorful gradient of pinks and yellows.

Each original layer is now blasted out evenly across multiple layers.

Go check out MagicaVoxel → https://ephtracy.github.io/

10-13-21

Unity ML-Agents — Part II: Getting Started

This blog post is the second in a series covering getting started with the ML-Agents Unity toolkit. I am following Code Monkey’s YouTube tutorial, and these posts roughly follow that video.

For steps on how to set up ML-Agents and resource links, check out my Part I post.

The general idea

Using ML-Agents in your Unity project, you will create an “Agent” that will follow this pattern:

  1. Observation
  2. Decision
  3. Action
  4. Reward

(Repeat)

The rewards can be either positive or negative (penalties) and can be weighted however you choose.

The ML-Agents GitHub repository includes example projects to help get you started: https://github.com/Unity-Technologies/ml-agents/

Following Code Monkey’s YouTube tutorial, we will create a game with an Agent object and a “goal”/”target” object. We can also add “enemy” objects that the agent should avoid. When the Agent triggers the “target” they will be positively rewarded, but when they trigger an “enemy” they will be penalized.

1. Create your “Target” game object

Open up your Unity project (set up with the required packages, like we did in Part I). Create a game object to act as your “target” / “goal”. I added a sphere to my scene and named it “Microfilm”. Add both a “Rigidbody” component and a “Physics Box Collider” component to your Target game object, and in the collider settings, mark the object as a trigger.

2. Create your “Enemy” game object(s)

Now create a game object to act as your “enemy” / thing you want the Agent to avoid. I added a cube to my scene and named it “Shark”. Like we did with the Target object, add both a “Rigidbody” component and a “Physics Box Collider” component to your Enemy game object, and mark the object as a trigger.

3. Create your “Agent” game object

Now create a game object to be your Agent. For this I added another cube to my scene and named him “JimBand”. Like before, add both a “Rigidbody” component and a “Physics Box Collider” component to your Agent game object, and mark the object as a trigger. We’ll come back to our Agent Jim Band in a moment after we get some other stuff set up…

4. Make a script for your Agent

We need some code to tell Agent Jim Band what to do, so create a new C# script in your project. I’ll call my script “JimBandAgent.cs”.

Inherit from the Agent class

Instead of inheriting from the MonoBehaviour class, we need our JimBandAgent class to inherit from the Agent class:

    public class JimBandAgent : Agent
    {
        // ...
    }

Include ML-Agents

Our JimBandAgent class also needs access to the ML-Agents toolkit, so at the top of your script include:

    using Unity.MLAgents;

To follow along with Code Monkey’s tutorial, I also needed to directly include the Actuators and Sensors namespaces:

    using Unity.MLAgents.Actuators;
    using Unity.MLAgents.Sensors;

Override Agent methods

We’ll need to override some of the built-in Agent methods to be able to control Agent Jim Band exactly how we want. Include the `OnActionReceived` and `CollectObservations` methods in your script, making sure to use the “override” modifier:

    public override void CollectObservations(VectorSensor sensor)
    {
        // we'll add observations here in a bit
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        // we'll add actions here in a bit
    }

We’ll come back to these in a bit.

5. Back to your Agent game object…

Link the script

Link your new Agent script to your Agent game object by adding a script component and selecting your script.

Set Behavior Parameters

Now we’ll set the Agent object’s Behavior Parameters. In the Agent object’s Properties menu, there should now be a “Behavior Parameters” section. Give the behavior a name, such as “GetMicrofilm”.

Under “Vector Action” or “Actions”, there are some options for how our actions will be represented. There are two space types to choose from: “Discrete” and “Continuous”. “Discrete” means integers and “Continuous” means floating point.

For either continuous or discrete, you can set the number of actions. For continuous actions, this field might be labeled “Continuous Actions” or “Space Size”, and for discrete actions it might be labeled “Discrete Branches” or “Branches Size”. This will be the number of available actions (and likewise the size of the Action Buffer array holding those actions).

For discrete actions, you can also set a “Branch Size” for each individual branch. This is the number of options for each branch (action). For example, you could choose to have 2 actions represented by 2 discrete branches, with one action of size 2 and the other of size 3. The first action/branch might represent “Accelerate” and “Brake”, and the second action/branch represents “Left”, “Right”, and “Forward” (example taken from Code Monkey tutorial).
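
In code, each branch then shows up as one entry in the DiscreteActions buffer. A minimal sketch of reading that two-branch example inside OnActionReceived (the branch meanings are just the hypothetical ones above):

    public override void OnActionReceived(ActionBuffers actions)
    {
        int pedal = actions.DiscreteActions[0];  // branch 0, size 2: 0 = Accelerate, 1 = Brake
        int steer = actions.DiscreteActions[1];  // branch 1, size 3: 0 = Left, 1 = Right, 2 = Forward
        // ...move the Agent based on pedal and steer...
    }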

For this demo, we’ll select Continuous Actions of size 2, representing the x and z axes.

Add a Decision Requester

Add a Decision Requester component to your Agent game object, which is listed under “Components” > “ML Agents”. This will request a decision at regular intervals, which will allow the Agent to then take actions.

At this point we can run a test training. Since I am going to include info on training in my next post, I am going to skip this for now, but check out the Code Monkey video tutorial for more info [17:21].

6. Observations (inputs)

Back to your Agent script…

In your Agent script, add a reference to the Target’s position and a reference to the Enemy’s position:

    public Transform microfilmTransform;
    public Transform sharkTransform;

Make sure to link the Target object and Enemy object to your script’s Transform reference variables. To do this, go into the Unity Editor, go to the script section of your Agent object, and link the correct objects to each variable.

Now add inputs for your Target and Agent to your `CollectObservations` method:

    public override void CollectObservations(VectorSensor sensor)
    {
        sensor.AddObservation(transform.position);  // pass in agent's current position
        sensor.AddObservation(microfilmTransform.position); // pass in target's position
    }

Since our input will be 2 positions (that of JimBand and that of his Microfilm target), each represented by 3 values (x, y, z), we will have 6 input values to observe. Next we have to add these to our Agent’s Behavior Parameters.

And back to your Agent object…

In your Agent game object’s Behavior Parameters, add the correct “Space Size” for “Vector Observation”. We have 6 input values to observe, so our Vector Observation Space Size is 6. The “Stacked Vectors” parameter in this same section sets how many consecutive observations get stacked together as the input for each decision — it allows your AI to have memory. Cool!

7. Actions

In the script

Now we’ll start overriding the `OnActionReceived` method. This is where you’ll add actions. Agent JimBand will be moving around searching for the Microfilm, so we’ll set a speed he can move at, and move him to his new position using the x and z coordinates from the Actions Buffer:

    public override void OnActionReceived(ActionBuffers actions)
    {
        float moveSpeed = 2f;
        float moveX = actions.ContinuousActions[0];
        float moveZ = actions.ContinuousActions[1];
        transform.position += new Vector3(moveX, 0, moveZ) * Time.deltaTime * moveSpeed;
    }

8. Rewards and penalties

Still in the script…

Let’s add a method to handle trigger events. Here’s what we want our trigger events to do:

  • If Agent triggers Microfilm, give positive reward and reset game.
  • If Agent triggers Shark, give penalty and reset game.

In our trigger handling method, we can handle rewards with built-in method `AddReward`, which increments a reward, or built-in method `SetReward`, which sets a specific reward. Here’s what my method to handle trigger events looks like:

    private void OnTriggerEnter(Collider other)
    {
        if (other.TryGetComponent(out Microfilm microfilm))
        {
            SetReward(1f);
            EndEpisode();
        }
        if (other.TryGetComponent(out Shark shark))
        {
            SetReward(-1f);
            EndEpisode();
        }
    }
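
Note that for TryGetComponent to find a Microfilm or a Shark, the Target and Enemy game objects each need a matching component attached. These can just be empty marker scripts (each class in its own file, named to match):

    // Microfilm.cs, attached to the Target object
    using UnityEngine;

    public class Microfilm : MonoBehaviour { }

    // Shark.cs, attached to the Enemy object
    using UnityEngine;

    public class Shark : MonoBehaviour { }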

We also need to override the `OnEpisodeBegin` method to reset the state of the game and move Jim back to his starting position:

    public override void OnEpisodeBegin()
    {
        transform.position = Vector3.zero;
    }

9. Test it out

In order to test out our code before training our Agent, we can override the method `Heuristic`. This will allow us to control the actions passed to the `OnActionReceived` method. In your Agent script include:

    public override void Heuristic(in ActionBuffers actionsOut)
    {
        ActionSegment<float> continuousActions = actionsOut.ContinuousActions;
        continuousActions[0] = Input.GetAxisRaw("Horizontal");
        continuousActions[1] = Input.GetAxisRaw("Vertical");
    }

To use your Heuristic override method for testing, in your Agent object’s Behavior Parameters, set “Behavior Type” to “Heuristic Only” (“Default” will also work if there is no ML model in use).

Now when you run your game, your input (up/down/left/right) will control Agent JimBand. Test whether your inputs and triggers are working correctly. When you drive Jim into either the Shark or the Microfilm, he should be reset to his starting position. Use `Debug.Log` to print any debugging output you need to the Console.

In the next segment I’ll go over training our ML Agent, so he can drive himself into the Shark and Microfilm, and hopefully into the Microfilm more than into the Shark, all on his own. 🙂

    |\____/|
    | @  @ |
>-oo| '''' |oo-<
    |______|
    |_/  \_|
    ^^    ^^

10-13-21

Unity ML-Agents — Part I: Set-up

This blog post will be the first of several as I get started with ML-Agents, a Unity toolkit for adding deep learning technology to your game dev projects. I am following Code Monkey’s YouTube tutorial, and these posts will roughly follow that video, with added notes for anything extra I ran into along the way.

I am working on a ThinkPad running Ubuntu 20.04.3, relying on an integrated GPU (Mesa Intel® UHD Graphics 620 (WHL GT2)). I am currently using Unity Hub 2.4.5, Unity Editor 2020.3.18f1, Python 3.8.10, pip as my Python package manager, and venv for my virtual environment.


1. Create a Unity project

If you don’t already have one started, create a new Unity project.

2. Create a Python virtual environment

Next, create a Python virtual environment in your Unity project directory.

Virtual environments allow you to create an isolated environment for each of your Python projects, so that each project can have its own dependencies independent from every other project. If one project requires NumPy 1.20.1 and another requires NumPy 1.19.5, creating a virtual environment for each project is an easy way to make sure everyone’s NumPy needs are happily met.

To get venv to work on Ubuntu, I first had to install it using apt:

$ sudo apt install python3.8-venv

To create a new venv virtual environment, use the venv module:

$ python3 -m venv MyVenv

To activate/enter your venv, source its activate script. On my computer this looks like:

$ source MyVenv/bin/activate

Once you are inside your active virtual environment, you can deactivate/exit it using the command deactivate:

$ deactivate

3. Install packages (in venv)

Update pip

Upgrade pip inside the venv virtual environment you created:

$ python3 -m pip install --upgrade pip

Upgrade setuptools:

$ python3 -m pip install --upgrade setuptools

Update CUDA if you have it

If you have an Nvidia GPU and will be using CUDA, you can install/update CUDA. Note that you may need an older version of CUDA to work with your PyTorch/ML-Agents/Unity set-up.

The Code Monkey ML-Agents YouTube tutorial provides some info on setting up with CUDA.

Install PyTorch

The PyTorch website provides a helpful tool to select your set-up preferences and determine the command you’ll need to install PyTorch. I selected the stable 1.9.1 version* of PyTorch, Linux OS, Pip package manager, Python language, and CPU compute platform, and it provided me with this command:

$ python3 -m pip install torch==1.9.1+cpu torchvision==0.10.1+cpu torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html

*It turned out I actually needed a lower version of PyTorch to work with ML-Agents, which I discovered once I tried to install ML-Agents (below). Here’s the command that ended up working for me, to install PyTorch 1.8.2:

$ python -m pip install torch==1.8.2+cpu torchvision==0.9.2+cpu torchaudio==0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html

Install ML-Agents

Install ML-Agents in your Python virtual environment:

$ python3 -m pip install mlagents

At this point, with PyTorch 1.9.1, I received this error message during installation:

    Attempting uninstall: torch
    Found existing installation: torch 1.9.1+cpu
    Uninstalling torch-1.9.1+cpu:
    Successfully uninstalled torch-1.9.1+cpu
    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behavior is the source of the following dependency conflicts.
    torchvision 0.10.1+cpu requires torch==1.9.1, but you have torch 1.8.1 which is incompatible.
    torchaudio 0.9.1 requires torch==1.9.1, but you have torch 1.8.1 which is incompatible.
    Successfully installed absl-py-0.14.1 attrs-21.2.0 cachetools-4.2.4 cattrs-1.5.0 certifi-2021.5.30 charset-normalizer-2.0.6 cloudpickle-2.0.0 google-auth-1.35.0 google-auth-oauthlib-0.4.6 grpcio-1.41.0 h5py-3.4.0 idna-3.2 markdown-3.3.4 mlagents-0.27.0 mlagents-envs-0.27.0 oauthlib-3.1.1 protobuf-3.18.1 pyasn1-0.4.8 pyasn1-modules-0.2.8 pyyaml-5.4.1 requests-2.26.0 requests-oauthlib-1.3.0 rsa-4.7.2 six-1.16.0 tensorboard-2.6.0 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.0 torch-1.8.1 urllib3-1.26.7 werkzeug-2.0.2 wheel-0.37.0

Ok, so it uninstalled my brand new PyTorch 1.9.1, replaced it with PyTorch 1.8.1, and then complained that it needs 1.9.1… 😛

Reinstalling PyTorch using 1.8.2 seemed to resolve the dependency issues for me:

$ python -m pip install torch==1.8.2+cpu torchvision==0.9.2+cpu torchaudio==0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html

Code Monkey also recommends installing ML-Agents using the 2020-resolver feature if you run into issues with package version requirements:

$ python3 -m pip install mlagents --use-feature=2020-resolver

Verify that it worked

Now! You can verify that you have ML-Agents installed in your virtual environment by running the mlagents-learn command to read the help text:

$ mlagents-learn --help

That should hopefully now display the help text without any warnings or errors 😀

4. Install ML-Agents, but now in Unity

Ok. Now we can go into the Unity Editor and install the packages we need there.

To install the ML-Agents package in your Unity project, open up your project in the Unity Editor. For me this means first opening up Unity Hub, and then opening my project in the Editor from there.

Now that you’re in the Unity Editor, in your Unity project, go to “Window” and click on “Package Manager”. This is where you can manage, add, and remove Packages for your project. Selecting “Packages: In Project” will show the packages in your project, and selecting “Packages: Unity Registry” will show you all available packages.

To add the ML-Agents package to your project, select “Packages: Unity Registry”, find “ML Agents” and install the version you want. By default the packages that appear in the Registry are the latest stable versions (“Verified Packages”), but the Unity ML-Agents installation guide recommends enabling “Preview Packages” in order to use the latest release, as long as you’re not in a production stage. To enable preview packages, from the same Package Manager window click the gear icon to open Advanced Project Settings and check the box that says “Enable Preview Packages.”

To verify that you have ML-Agents installed correctly in your Unity project, create a new empty game object and see if you can add an ML-Agents Component to it. In the object’s Properties (either in the Inspector or right-click on “Properties”), there should now be an ML Agents option when you click “Add Component”. 🙂

5. Install ML-Agents Extension, in Unity

At this point I needed to install the ML-Agents Extension package separately to continue following along the Code Monkey tutorial, since it is not included with the regular ML-Agents Unity package.


To add the ML-Agents Extension package, you can go through either the Unity Package Manager, or you can add it directly to the manifest.json file in the Packages directory in your Unity project.

For either method you can use either a Git URL or a local path. Here are the steps for each of those 4 options:

Adding a package — Unity Package Manager using Git URL

To add a new package from the Unity Package Manager, click the “+” button, which will give you options on how to add a new package. To add a package from a GitHub URL, click “Add package from git URL”, and paste in the URL to the repository you need.

The Git URL you will need will look like this, with the “git+” at the front, and the “#release_XX” at the end:

"git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_18"

Adding a package — Unity Package Manager using local package

You can also add a package from the Unity Package Manager using local source files. To locally install the ML-Agents Extensions package, first clone the ml-agents repository (https://github.com/Unity-Technologies/ml-agents) to somewhere that makes sense to you on your computer.

To make sure you have the correct version you need, check the version list in the README (https://github.com/Unity-Technologies/ml-agents#releases--documentation). You can either clone the specific branch you need:

$ git clone --branch release_18 https://github.com/Unity-Technologies/ml-agents.git

Or, you can clone the main branch, and later checkout/switch to the branch you need:

$ git clone https://github.com/Unity-Technologies/ml-agents.git
$ cd ml-agents
$ git checkout release_18

Once you have your package repo set up locally, select “Add package from disk” in the Unity Package Manager. For the ML-Agents Extension package, you will navigate to the com.unity.ml-agents.extensions folder. From there select the package.json file.

Adding a package — Add Git URL to package manifest.json

You can also add a new package by directly editing your Project Packages manifest.json file. To add the ML-Agents Extension package using a Git URL, you will need to insert a line that looks like this:

"com.unity.ml-agents.extensions": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_18"

Adding a package — Add local path to package manifest.json

If you are using local source files to add your ML-Agents Extensions package, the line you add to your Project Packages manifest.json file will instead need to include the path to your local com.unity.ml-agents.extensions directory:

"com.unity.ml-agents.extensions": "file:../MyPackages/MLAgents/ml-agents/com.unity.ml-agents.extensions"

To be continued…

Woo hoo! We’ve now finally got ML-Agents set up and we’re ready to train some little AI gamers.

Code Monkey’s ML-Agents video tutorial is great, and I would highly recommend checking it out. I hope this post is a helpful supplement to that tutorial for anyone else working in Linux, or anyone else who likes having written steps to refer to as well.

I’ll try to post the next segments soon, which will go over a basic intro to actually creating and using ML-Agents in your Unity game.

       ___-___
      |  o o  |
       =======
   >--/.......\--<
      \,,,,,,,/
       ()) ())

10-07-21

Resources for using Git with Unity

I’ve recently started learning Unity, which has been a ton of fun. I’ve been following some online guides and tutorials, and I started building a small game as a personal project. I’ve been enjoying it a lot, and I decided I was interested in building a game as part of the OSU Senior Capstone project. I found a team who is also interested in game dev, and I’m excited to learn some new skills!

For both my personal projects and our team’s collaborative project, I want to use Git and GitHub, so I’ve been learning how to set up Unity and Git to work well together. Here are some resources I found helpful for adding Git version control to your Unity project.

1. This article by Rick Reilly

thoughtbot.com/blog/how-to-git-with-unity

I found this to be an incredibly helpful walk-through on how to set up Git with Unity. Rick first explains some common problems faced when using Git with Unity, to identify what problems we’ll need to solve with our setup. The article then lays out 3 solutions to resolve these issues, with easy steps to follow for each one:

  1. Add Unity-specific .gitignore settings
  2. Configure Unity for version control
  3. Use Git Large File Storage

2. This .gitignore template

https://github.com/github/gitignore/blob/master/Unity.gitignore

GitHub provides this super handy .gitignore template specifically for Unity projects. GitHub provides lots of other .gitignore templates as well! Check them out here: https://github.com/github/gitignore

Sort of related note: I often find myself needing to ignore already-tracked files in my Git repos (because thinking ahead is hard). If you ever need to stop tracking and start ignoring a tracked file in your repo (without deleting it from your local files), you can add the file to your .gitignore file and use the command git rm --cached to stop tracking it.
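
For example (a sketch, with Build/ standing in for whatever you need to untrack):

    $ echo "Build/" >> .gitignore
    $ git rm -r --cached Build/
    $ git commit -m "Stop tracking Build directory"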

3. The Git LFS site

https://git-lfs.github.com

The Git Large File Storage site provides installation instructions and downloads to get you set up using Git LFS.

There are 2 basic steps to set up Git LFS:

  1. Install Git LFS
  2. Select which file types you’d like Git LFS to manage.

You can track specific file types with Git LFS by adding a .gitattributes file to your repository. Rick Reilly’s post helpfully includes a sample .gitattributes file you can use as a template.

You can alternatively track file types using the command git lfs track, e.g. git lfs track "*.psd".

To check the status of your repo’s Git LFS, use the command git lfs status.

4. The Unity Manual

https://docs.unity3d.com/Manual/ExternalVersionControlSystemSupport.html

The Unity Manual provides some (limited) information on setting up external version control systems for your projects. It doesn’t seem to mention Git anywhere, but it does provide steps for configuring Unity to use an external control system.

Have fun!

           ?
         __|__
       /  + +  \
       \ ::::: /
   )==||```````||==(
       \_______/
       // ||  \\
      []] []] []]

09-28-21