At the Linley Processor Conference earlier this week, I had the opportunity to present the challenges facing architects who are building hardware for distributed cloud intelligence. I also discussed how you can address these challenges with ARM’s 3rd generation coherent backplane IP: the ARM CoreLink CMN-600 and ARM CoreLink DMC-620. The new on-chip network and memory controller IP has been optimized to boost SoC performance across a broad range of applications and markets, including networking, server, storage, HPC, automotive and industrial.
The need for an intelligent flexible cloud
Not only are we seeing significant growth in the number of connected devices, but we are also seeing evolving use cases. Virtual reality is hitting mainstream price points, requiring a constant high-bandwidth stream of content. Autonomous vehicles are generating a lot of buzz, but we probably will not see truly autonomous vehicles on our streets until ultra-low latency car-to-car communication is deployed. These new use cases will require an intelligent, flexible cloud where applications and services are pushed to the edge of the network.
Blending compute and acceleration from edge to cloud
A new approach will be required to meet the demands of these evolving use cases. Today, system architects are trying to figure out how to maximize efficiency with heterogeneous computing and acceleration (e.g. GPU, DSP, FPGA) to optimize systems across a wide range of power and space constraints. During the presentation, I showed three example design points, each with different needs and constraints: the data center, maximizing compute density for a wide variety of workloads; the edge cloud, providing distributed services; and the small access point, keeping all the end points connected at all times.
New high performance, scalable architecture
These three heterogeneous design points illustrate the targets we set out to address with our 3rd generation coherent backplane IP architecture. Our goal was to maximize compute performance and throughput (a measure of both bandwidth and number of transactions), across a broad range of power and area constraints.
The result is our new CoreLink CMN-600 Coherent Mesh Network and CoreLink DMC-620 Dynamic Memory Controller. Together they have been optimized to provide a fast, reliable on-chip interconnect and memory subsystem for heterogeneous SoCs that blend ARMv8-A processors, accelerators and IO.
Some of the key new capabilities and performance metrics include:
- New scalable mesh network that can be tailored for SoCs from 1 to 32 clusters (up to 128 processors)
- 5x higher throughput than the prior generation, and capable of more than 1 TB/s of sustained bandwidth
- Higher frequencies (exceeding 2.5 GHz) and 50 percent lower latency
- New Agile System Cache with intelligent cache allocation to enhance sharing of data between processors, accelerators and IO
- Support for CCIX, the open industry standard for coherent multi-chip processor and accelerator connectivity
- 1 to 8 channels of DDR4-3200 memory and 3D stacked DRAM, for up to 1 TB of addressable memory per channel
- End-to-end QoS and RAS (Reliability, Availability and Serviceability) supported by the combined CMN-600 and DMC-620 solution
- In-built security with integrated ARM TrustZone Address Space memory protection
- Automated SoC creation with ARM CoreLink Creator and Socrates DE tooling
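To put the memory figures above in perspective, here is a quick back-of-the-envelope sketch of peak theoretical DDR4 bandwidth. This is an illustration, not ARM collateral: the 64-bit data bus per channel is a standard DDR4 assumption, and DDR4-3200 denotes 3200 mega-transfers per second.

```python
# Back-of-the-envelope peak bandwidth for a DDR4 memory configuration.
# Assumptions (not from the announcement itself): a standard 64-bit
# data bus per channel; DDR4-3200 = 3200 mega-transfers/second.

def ddr4_peak_bandwidth_gbs(mega_transfers_per_sec: int,
                            bus_width_bits: int = 64,
                            channels: int = 1) -> float:
    """Peak theoretical bandwidth in GB/s (1 GB = 1e9 bytes)."""
    bytes_per_transfer = bus_width_bits // 8  # 64-bit bus -> 8 bytes/transfer
    return mega_transfers_per_sec * 1e6 * bytes_per_transfer * channels / 1e9

per_channel = ddr4_peak_bandwidth_gbs(3200)              # one DDR4-3200 channel
eight_channels = ddr4_peak_bandwidth_gbs(3200, channels=8)
print(f"per channel: {per_channel:.1f} GB/s")            # 25.6 GB/s
print(f"8 channels:  {eight_channels:.1f} GB/s")         # 204.8 GB/s
```

Note that even a fully populated eight-channel configuration sits around 205 GB/s of peak DRAM bandwidth, which is why the mesh's 1 TB/s of sustained on-chip bandwidth matters: it leaves headroom for cache-to-cache, accelerator and IO traffic on top of memory traffic.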
The following image illustrates how the technology could be used across design points, from a small access point, focused on throughput with efficiency, up to the data center, focused on maximizing compute density.
We are really excited to see the continued evolution of these new intelligent, distributed use cases, and to see how SoC architects will deploy our new technology. Stay tuned as we continue to discuss these capabilities in the coming months.
If you would like to find out more about the IP, please check out our developer pages below or attend my upcoming technical talk at ARM TechCon, Oct 25-27, 2016 in Santa Clara, CA.