Part 1: Build dynamic and engaging mobile games with multi-agent reinforcement learning

Koki Mitsunami
August 16, 2023
3 minute read time.
Part 1 of 3 Blog Series


In March 2023, the Game Developers Conference (GDC), one of the biggest events for video game developers, was held in San Francisco. Last year, we showcased a one-on-one boss battle featuring a knight character controlled by an ML-based game AI. That was a single-agent system, but this year we expanded the scope to multi-agent systems, presenting a talk and demo called "Candy Clash." Using the Unity ML-Agents Toolkit, we developed a multi-agent system in which dozens of rabbit characters work as a team, aiming to crack their opponent's egg.

In this blog series, I'll explain how we developed this game demo. Part 1 provides a general overview of the demo.

We hope that this blog series will interest many game developers in machine learning technology.

Candy Clash

In the Candy Clash demo, numerous rabbit characters split into two teams and act according to the situation, aiming to crack each other's eggs. The rabbit characters' actions are selected by their assigned neural network (NN) models, and the game was developed to demonstrate how agents behave as a group. Below is a screenshot of the game.


Figure 1. Candy Clash demo

The objective of this demo is to either crack the opponent's egg or defeat all of the opponent's rabbits. The gauge at the top of the game screen shows each egg's hit points (HP), and below that, a gauge shows the remaining number of rabbits for each team. Cannons also fire toward areas with a high concentration of rabbits, either at regular intervals or when the user presses a button. The cannons are controlled by human-programmed logic rather than ML, and they indirectly affect the game's outcome. At GDC, we ran this demo on a Pixel 7 Pro and achieved 60 fps with 100 ML agents.
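To make the cannon behavior concrete, here is a purely illustrative Python sketch (not the demo's actual Unity code) of one way to pick a target: bucket the rabbit positions into a coarse grid and aim at the densest cell. The cell size and the example positions are assumptions.

```python
# Illustrative targeting sketch: aim the cannon at the grid cell containing
# the most rabbits. Cell size and positions are invented for the example.
from collections import Counter

CELL = 5.0  # grid cell size in world units (assumed)

def densest_area(rabbit_positions):
    """Return the center of the grid cell that contains the most rabbits."""
    counts = Counter((int(x // CELL), int(z // CELL)) for x, z in rabbit_positions)
    (cx, cz), _ = counts.most_common(1)[0]
    return ((cx + 0.5) * CELL, (cz + 0.5) * CELL)

# Three rabbits cluster in the cell covering x, z in [10, 15); the cannon aims there.
positions = [(11.0, 12.0), (12.5, 13.0), (11.8, 11.2), (40.0, 3.0)]
print(densest_area(positions))  # (12.5, 12.5)
```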

ML-Agents and multi-agent scenarios

The ML-Agents Toolkit enables game developers to train intelligent game AI agents within games and simulations. You can create more realistic and engaging gameplay experiences because non-player characters (NPCs) can learn from their surroundings and react more naturally to player inputs. Agents are trained through Reinforcement Learning (RL) using data gathered from their environment. This training enables agents to improve over time and make decisions based on what they have learned. Our previous blog post introduces the basic mechanism behind ML-Agents, and Unity's official documentation gives more details.
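If you want to see the toolkit's moving parts outside the Unity Editor, the low-level Python API (mlagents_envs) lets a Python script step a Unity build directly. The sketch below is a minimal example assuming a recent ML-Agents release; the build name is a placeholder, not a published Candy Clash binary.

```python
# Minimal sketch of stepping a Unity environment from Python with mlagents_envs.
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="CandyClashBuild")  # placeholder build path
env.reset()

behavior_name = list(env.behavior_specs)[0]  # e.g. the rabbits' behavior
spec = env.behavior_specs[behavior_name]

for _ in range(10):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    # Sample one random action for every agent requesting a decision this step.
    actions = spec.action_spec.random_action(len(decision_steps))
    env.set_actions(behavior_name, actions)
    env.step()

env.close()
```

In practice you would train with the mlagents-learn command and a trainer configuration file rather than random actions; the snippet only shows the observation-decision-action loop that the toolkit manages for every agent.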

Multi-agent systems involve multiple ML agents working together to achieve a common goal. They can tackle problems that are difficult for single-agent systems, and they can lead to more complex and dynamic gameplay experiences because different agents can take on different roles and cooperate or compete with each other. For example:

  • In strategy games, ML agents can be responsible for controlling different units. This makes each game session more unpredictable and challenging.
  • In racing games, ML agents can work together to create a more competitive environment. Each agent can adopt various racing strategies to outperform the others.

By incorporating multi-agent systems, developers can create games that keep players engaged with novel experiences.

Design approaches for multi-agent systems

There are several possible approaches to developing a multi-agent system. The right choice depends on:

  • The type of game you are developing
  • The resources available
  • Your implementation ideas

When the game setting is simple, you might create multiple instances of a single agent. In more complex settings, you might want a centralized agent to control all the characters. In recent years, an approach called Centralized Training, Decentralized Execution (CTDE) has emerged. In this framework, agents share data during training to learn optimal actions, but during inference they act independently without sharing data with each other. Another blog post explains how multi-agent training works in Unity.


Figure 2. Various possible design approaches for multi-agents
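To make the CTDE idea concrete, the PyTorch sketch below is an assumption-laden illustration (not the approach used in the demo): each agent has its own small policy that only ever sees its local observation, while a shared critic that sees the joint observation exists only at training time.

```python
# Conceptual CTDE sketch: decentralized actors, one centralized critic.
# The two-agent setup, dimensions, and network sizes are illustrative assumptions.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, N_AGENTS = 8, 4, 2

class Actor(nn.Module):
    """Decentralized policy: conditioned only on its own agent's observation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 32), nn.ReLU(), nn.Linear(32, ACT_DIM))

    def forward(self, local_obs):
        return torch.distributions.Categorical(logits=self.net(local_obs))

class CentralCritic(nn.Module):
    """Centralized value function: sees all agents' observations, training only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM * N_AGENTS, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, joint_obs):
        return self.net(joint_obs)

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()

# Training time: the critic evaluates the joint observation to produce a value
# estimate that feeds each actor's policy-gradient update.
joint_obs = torch.randn(1, OBS_DIM * N_AGENTS)
value = critic(joint_obs)

# Execution time: each actor samples an action from its local observation alone,
# with no communication between agents and no critic involved.
local_obs = [torch.randn(1, OBS_DIM) for _ in range(N_AGENTS)]
actions = [actor(obs).sample() for actor, obs in zip(actors, local_obs)]
```

Because only the actors are needed at run time, the critic can be discarded after training, which keeps on-device inference light.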

In Candy Clash, there are three roles for the rabbit characters:

  • Attacker: The attacker's goal is to crack the opponent's egg.
  • Defender: The defender's goal is to protect their own egg.
  • Wanderer: The wanderer's goal is to defeat the opponent rabbits.

A planner agent dynamically selects the role assigned to each rabbit character. Through this combination of planner and roles, the rabbits' behaviors change and adapt in real time to various game situations.


Figure 3. Our approach for Candy Clash
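The demo's planner is itself an agent, but as a purely illustrative sketch of the role-assignment idea, the rule-based Python below splits one team across the three roles from a few game-state fields. The fields and thresholds are invented for the example, not taken from the demo.

```python
# Illustrative role assignment: map the game situation to a split of the team's
# rabbits across the three roles. All fields and thresholds are assumptions.
from dataclasses import dataclass
from enum import Enum, auto

class Role(Enum):
    ATTACKER = auto()   # tries to crack the opponent's egg
    DEFENDER = auto()   # protects its own team's egg
    WANDERER = auto()   # hunts opponent rabbits

@dataclass
class GameState:
    own_egg_hp: float        # 0.0 to 1.0
    opponent_egg_hp: float   # 0.0 to 1.0
    own_rabbits: int

def assign_roles(state: GameState) -> dict:
    """Split the team's rabbits across roles depending on the situation."""
    n = state.own_rabbits
    if state.own_egg_hp < 0.3:
        # Our egg is in danger: pull most rabbits back to defend.
        split = {Role.DEFENDER: int(n * 0.6), Role.ATTACKER: int(n * 0.2)}
    elif state.opponent_egg_hp < 0.3:
        # Their egg is nearly cracked: press the attack.
        split = {Role.DEFENDER: int(n * 0.1), Role.ATTACKER: int(n * 0.7)}
    else:
        split = {Role.DEFENDER: int(n * 0.3), Role.ATTACKER: int(n * 0.4)}
    split[Role.WANDERER] = n - sum(split.values())  # remainder roams and fights
    return split

print(assign_roles(GameState(own_egg_hp=0.8, opponent_egg_hp=0.25, own_rabbits=50)))
```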

In Part 2, I explore the agents' design in more detail.
