ML/RL in Rocket League


The baseline

I was recently introduced to the RLBot community. It’s an open-source AI-development community focused on creating the most effective Rocket League bots.

The bot that caught my attention is Necto. Using RLGym, Necto trained itself to play Rocket League from scratch. RLGym runs the game at an accelerated speed, so roughly 18 hours of in-game time can be simulated for every 5 minutes of real time. This is similar to the approach behind OpenAI's famous Dota 2 bot, OpenAI Five.
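To give a sense of what that training setup looks like, here is a minimal sketch of the Gym-style loop RLGym exposes. The constructor options (game speed, reward function, observation builder, etc.) are omitted, exact signatures vary by version, and the random action here is just a stand-in for a learned policy.

```python
# A rough sketch of interacting with an RLGym environment. Assumes the
# default environment from rlgym.make(); configuration details and the
# exact action/step formats may differ across RLGym versions.
import numpy as np
import rlgym

env = rlgym.make()  # launches Rocket League with the training plugin attached

obs = env.reset()
done = False
while not done:
    # The standard controller input is 8 values: throttle, steer, pitch,
    # yaw, roll, jump, boost, handbrake (the last three are binary and are
    # typically thresholded). A random policy stands in for the agent here.
    action = np.random.uniform(-1, 1, size=8)
    obs, reward, done, info = env.step(action)

env.close()
```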

Rocket League's competitive ranks, from lowest to highest: Bronze, Silver, Gold, Platinum, Diamond, Champion, Grand Champion, and Supersonic Legend.

Necto places somewhere around the middle of these competitive ranks, specifically at Diamond. It is strongest in a one-versus-one scenario (called a "duel" or, more colloquially, "1v1"), and I have personally lost to it every time I've played.

Next Steps?

This is the extent of my ML/AI knowledge, but I am increasingly interested in creating a bot that picks up where Necto left off. Apparently, Necto's learning stagnated at some point (I couldn't tell you why), and a Necto V2, aka "Nexto," is currently being trained.

After playing multiple matches against Necto, it has become clear that the bot ignores certain aspects of gameplay: it rarely plays off the walls or takes to the air. A few other game mechanics could also stand to be integrated into its arsenal of techniques.

Flip Reset, an advanced Rocket League mechanic

Seeing as the bot is self-taught, a good goal would be to define certain behaviors that we know to be effective. A next step would be to set up highly structured learning environments where specific behaviors are trained, focusing on one mechanic at a time (for example, fast aerials, then half-flips, then wave-dashes). Once a specific target has been reached, a new mechanic is integrated.
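To make the "one mechanic at a time" idea concrete, here is a rough sketch of a mechanic-specific reward plus a naive rule for advancing the curriculum once a success threshold is met. The reset/get_reward pattern mirrors how RLGym reward functions are typically structured, but the class, field names, mechanic list, and threshold below are my own assumptions, not part of Necto or any existing codebase.

```python
class AerialTouchReward:
    """Illustrative reward for an 'aerial' training stage: pays out only
    when the car touches the ball while airborne."""

    def reset(self, initial_state):
        # No per-episode bookkeeping needed for this simple reward.
        pass

    def get_reward(self, player, state, previous_action):
        # Hypothetical player fields; the real attribute names may differ.
        if player.ball_touched and not player.on_ground:
            return 1.0
        return 0.0


# A naive curriculum: train each mechanic in its own structured environment
# and move on once the rolling success rate clears a threshold. Names and
# numbers are placeholders.
MECHANICS = ["fast_aerial", "half_flip", "wave_dash"]
SUCCESS_THRESHOLD = 0.7

def next_mechanic(current, recent_success_rate):
    """Advance to the next mechanic once the agent is reliable enough."""
    idx = MECHANICS.index(current)
    if recent_success_rate >= SUCCESS_THRESHOLD and idx + 1 < len(MECHANICS):
        return MECHANICS[idx + 1]
    return current
```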

This is my naive take on the whole thing. The bot is self-trained and chooses the best options based on its experience. Randomness is also introduced to allow for exploration; however, starting from what we know to be current best practice would be ideal. I'm excited to learn more about this! If you have any helpful information, I'd love to hear it!

Print Friendly, PDF & Email
,

Leave a Reply

Your email address will not be published. Required fields are marked *