The world is generating more data than ever before, and the demand for compute power is only increasing. This year alone, over 120 zettabytes of new data are expected to be created. Traditional server architectures are struggling to keep pace as single-core performance gains from process improvements plateau. Innovative new server designs are transforming the traditional datacenter model by leveraging specialized processing for specific workloads.
Data Processing Units (DPUs) are an example of exactly this phenomenon: innovation through specialization transforming the status quo. DPUs are specialized processors that offload infrastructure tasks from the main CPU, freeing it for client applications and delivering significant performance improvements. DPUs come in a variety of shapes and sizes, but they all have one thing in common: they perform specific tasks with less energy and higher performance than the main CPU. Today, DPUs are used to accelerate networking, security, and storage tasks.
Beyond performance, DPUs offer a number of other advantages. For example, they improve data center security by providing physical isolation for infrastructure tasks. They also help reduce latency for applications that require real-time data processing. And because DPUs create a logical split between infrastructure compute and client applications, workloads can be managed independently by separate development and operations teams.
The success of DPUs in hyperscaler server architectures is starting to extend to enterprise data centers, telco servers, and edge compute. However, these environments bring additional challenges.
In the hyperscaler environment, hardware and software are tightly coupled, often controlled by the same development team and tailored to a specific application. Outside of the cloud, the landscape is much more fragmented. Many more companies are building DPUs with specific accelerators in mind. OEMs are expected to embrace these cards, but this will not happen unless the industry can agree on a standard for discovery, provisioning, and life-cycle management. Unlike SmartNICs, DPUs have general-purpose cores, which makes them suitable for layered software development and therefore able to support standard abstractions and interfaces.
This is where the Open Programmable Infrastructure (OPI) project from the Linux Foundation comes in. OPI focuses on using open software and standards, along with frameworks and toolkits, to enable the rapid adoption of DPUs. The OPI Project brings together hardware and software companies to establish and nurture an ecosystem. Together, they are creating solution blueprints and standards to ensure that compliant DPUs work with any server. OPI is an open, collaborative environment with the right mix of companies to create end-to-end reference designs.
The Arm Neoverse platform is at the heart of the majority of DPU designs, leading the hardware innovation. We are joining the OPI project to support the software ecosystem and give developers the tools to maximize the benefits of the latest Arm cores. As a starting point, we are expanding the Arm SystemReady certification to cover DPUs. This program aligns hardware and firmware to provide a proven recipe for seamlessly booting popular Linux distributions. This crucial first step will accelerate bring-up, simplify the lifecycle management of applications running on DPUs, and free up development teams to focus on their core offering.
For more information:
[CTAToken URL = "https://opiproject.org/" target="_blank" text="Visit OPI Project Page" class="green"]