At Arm, we see neural graphics as a key part of next-generation game development. By combining machine learning with real-time rendering, developers are unlocking new ways to improve image quality across a range of devices, especially in mobile environments where efficiency is critical.
To support this shift, we have launched a growing set of open-source tools that help developers experiment with, train, and deploy their own neural models for real-time graphics workloads. In August, we introduced the first wave of these tools through the Neural Graphics Development Kit.
At the PyTorch Conference last week, we introduced developers to the toolkit’s latest addition: the Model Training Gym. In a hands-on demo, we showed how the PyTorch ecosystem powers the entire model training pipeline for this use case.
The response from developers was overwhelmingly positive. One attendee summed up the impact of Neural Super Sampling (NSS):
“It’s amazing that you can get the same image quality using just 25% of the compute.”
The concept of applying ML to graphics tasks resonated across the board, especially among developers familiar with DLSS-style upscaling. Many were surprised by how compact and efficient the models were, and even more intrigued by the ability to train and fine-tune them using PyTorch.
The open-source toolkit helps developers train, fine-tune, and export models for neural graphics, including upscaling techniques like NSS.
It supports the full machine learning lifecycle for neural graphics, beginning with a PyTorch-based training and evaluation API and ending with export through ExecuTorch. The final output is a .vgf file, a model format for deploying models into real-time rendering pipelines on Arm-based hardware with neural accelerators. Dedicated neural accelerators are coming to future Arm GPUs, but developers can start building and testing today with the current toolchain.
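For orientation, here is a minimal sketch of what the ExecuTorch leg of that pipeline looks like in standard PyTorch. The tiny stand-in model and output file name are placeholders, and the final .vgf conversion is a toolkit-specific step on top of this flow that is not shown here; see the Learning Path for the exact commands.

```python
import torch
from executorch.exir import to_edge

# Any small PyTorch module stands in for a trained upscaling network here.
class TinyNet(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x)

model = TinyNet().eval()
example_inputs = (torch.randn(1, 3, 540, 960),)

# Standard ExecuTorch lowering: torch.export -> edge dialect -> ExecuTorch program.
exported = torch.export.export(model, example_inputs)
edge = to_edge(exported)
et_program = edge.to_executorch()

# Serialize the program; the Model Gym's .vgf export builds on this flow.
with open("model.pte", "wb") as f:
    f.write(et_program.buffer)
```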
To support debugging and visualization, the workflow includes a branch of the Model Explorer. This lets you inspect the .vgf model’s structure and graph connectivity before deployment.
In the NSS use case, a high-quality frame is reconstructed from only a fraction of the rendered pixels, showing how learned upscaling preserves detail while reducing rendering cost.
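To make that concrete, here is a minimal, self-contained sketch of a 2x convolutional upscaler. This is not the actual NSS architecture, just an illustration of the idea: rendering at half the width and height means the network sees only 25% of the output pixels.

```python
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Toy 2x upscaler: predicts a full-resolution frame from a
    half-width, half-height render (25% of the output pixels)."""
    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Predict 4 sub-pixels per input pixel, then rearrange
            # them into a 2x larger image with PixelShuffle.
            nn.Conv2d(hidden, channels * 4, kernel_size=3, padding=1),
            nn.PixelShuffle(2),
        )

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        return self.body(low_res)

# A 960x540 render reconstructed at 1920x1080.
frame = torch.randn(1, 3, 540, 960)
print(TinyUpscaler()(frame).shape)  # torch.Size([1, 3, 1080, 1920])
```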
You can explore the Model Gym through Jupyter notebooks or the CLI, making it easy to prototype quickly or to integrate the tools into automated pipelines. For developers who are already familiar with PyTorch, the experience feels intuitive from the start.
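As a rough illustration of how compactly that workflow can read from a notebook, consider the sketch below. The module and function names are hypothetical, not the toolkit's actual API; the Learning Path documents the real entry points.

```python
# Hypothetical sketch only -- these names are illustrative,
# not the Model Gym's actual API.
from model_gym import train, evaluate, export_vgf  # hypothetical imports

model = train(config="nss_baseline.yaml")      # PyTorch training loop
metrics = evaluate(model, split="validation")  # image-quality metrics
export_vgf(model, "nss.vgf")                   # ExecuTorch-backed export
print(metrics)
```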
Whether you joined us at the event or are just hearing about this for the first time, the good news is that everything we showcased is open source and available today. This includes the code, the models, and the training flow.
A new Arm Learning Path is available to guide you through each step: training a neural upscaling model, evaluating it, and exporting it to .vgf. Along the way, you gain practical insight into how neural graphics models are developed and deployed.
As we look ahead to 2026 and the arrival of dedicated neural accelerators in Arm GPUs, the Model Gym lets developers start building today. If you are curious about applying ML beyond traditional tasks and into the world of real-time rendering, the Model Gym is your invitation to explore.
Start the Learning Path and explore what is possible with neural graphics on Arm.
Fine-tuning neural graphics models with Model Gym Learning Path