Author: Francisco Socal, Senior Product Manager, Architecture and Technology Group, Arm
In this blog, we introduce our approach to accelerator and device virtualization, with a view to agreeing on industry standards.
The complexity and performance requirements of computing systems have been growing, and demands are further driven by applications such as machine learning (ML) and the everything-connected world of IoT, with many billions of connected devices. With the slowdown of Moore’s law, accelerators and input/output (I/O) devices are increasingly employed in heterogeneous compute systems and used by software, either to alleviate the overhead of computationally expensive operations or to interface with other systems.
These devices need a well-defined interface to software: the hardware/software interface. The design of this interface is critical to system performance and to the ease of software development and deployment.
Virtualization significantly increases the complexity of the hardware/software interface, particularly with regard to accelerators and I/O devices. It's widely used in cloud computing, providing significant economic benefits through techniques like multitenancy and elasticity. Virtualization is now being adopted in other markets as well, such as networking and automotive, making it a key requirement for computing systems across multiple segments and applications.
Optimal performance requires minimizing latency and software overhead when offloading tasks to accelerators and I/O devices. This is particularly important for small (fine-grained) tasks. In the case of virtualization, optimal performance also requires a flexible approach to sharing the hardware resources of a physical device across virtual machines and user-space applications, with minimal dependency on the hypervisor.
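To make the fine-grained point concrete, here is a toy model (ours, not part of the Revere-AMU architecture; the fixed 5 microsecond offload cost is a purely hypothetical figure) showing how a constant per-offload software overhead dominates short tasks but amortizes away for long ones:

```python
# Illustrative model: fraction of total time spent on useful work when
# each offload to an accelerator incurs a fixed software/latency cost.
# All numbers below are hypothetical, chosen only to show the trend.

def offload_efficiency(task_us: float, overhead_us: float) -> float:
    """Useful-work fraction for one offloaded task of duration task_us."""
    return task_us / (task_us + overhead_us)

OVERHEAD_US = 5.0  # assumed fixed cost per offload (hypothetical)

for task_us in (1.0, 10.0, 100.0, 1000.0):
    eff = offload_efficiency(task_us, OVERHEAD_US)
    print(f"{task_us:7.1f} us task -> {eff:.1%} efficient")
```

A 1 microsecond task spends most of its time in offload overhead, while a 1 millisecond task barely notices it, which is why fine-grained offload drives the requirement for a low-latency, low-overhead hardware/software interface.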
While different solutions exist today to address these requirements and challenges, we believe there is a need for new industry standards for the hardware/software interface of accelerators and I/O devices. Standardization enables the development of standard software frameworks and an ecosystem of device and accelerator components, with wider benefits such as interoperability, re-usability, reduced development costs, and faster time-to-market.
In order to drive a discussion within our ecosystem and the broader industry, we developed the Revere Accelerator Management Unit (Revere-AMU) System Architecture. It defines a complete set of system architecture layers for interfacing accelerators and I/O devices, as illustrated in the diagram (below). It builds upon existing industry standards where suitable ones exist, such as the widely deployed AMBA interface protocols, and proposes new standards where we perceive benefits in an alternative.
Revere-AMU is an advanced development effort, and an implementation of the architecture will be integrated into several of DARPA’s Electronics Resurgence Initiative (ERI) projects. We also see potential use cases for the Revere-AMU that align with Arm’s Neoverse platform roadmap, in areas such as CCIX and deploying acceleration functions at the edge.
The Revere-AMU is a vehicle to explore further requirements and solutions, while we collaborate with the industry to identify a standardized solution that can benefit the whole ecosystem.
Set of standardization layers to enable low friction integration of accelerators and the implementation of standard software frameworks
Find out more and download the Revere-AMU specifications.
In the white paper, A Layered Approach to High Performance Device Virtualization, we discuss these topics in more detail.
Download White Paper
Please send your feedback to email@example.com