Arm CMN S3: Driving CXL storage innovation

John Xavier Lionel
February 24, 2025
3 minute read time.

The Arm CMN S3 (Coherent Mesh Network) interconnect is at the forefront of enabling high-performance, scalable, and efficient solutions for modern compute and storage architectures. With its ability to seamlessly manage coherent communication and memory sharing, the Neoverse CMN S3 is a transformative solution for modern storage architectures. Purpose-built to support the Compute Express Link (CXL) specifications, CMN-S3 facilitates seamless communication between compute and storage, enabling next-generation devices.

As data-centric applications grow, technologies like Compute Express Link (CXL) are revolutionizing the storage landscape. Neoverse CMN S3 plays a pivotal role in enabling high-performance, scalable storage devices configured as CXL Type 1 (accelerators) and Type 3 (memory expanders). By bridging compute and storage, Neoverse CMN S3 addresses the demands for efficiency, low latency, and coherence in modern data centers, delivering unmatched performance and flexibility for today’s data-intensive workloads.
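
On Linux, a CXL Type 3 memory expander is commonly presented to software as a CPU-less NUMA node. The sketch below is a minimal illustration, not Arm or partner code, and it assumes the CXL-attached memory appears as NUMA node 2 on the target platform (check numactl --hardware); it simply allocates and touches a buffer on that node with libnuma.

/* Minimal sketch: allocate a buffer from a CXL Type 3 memory expander
 * exposed as a CPU-less NUMA node. The node ID (2) and buffer size are
 * assumptions; verify the real topology with `numactl --hardware`.
 * Build: gcc cxl_alloc.c -lnuma -o cxl_alloc
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define CXL_NODE 2                      /* assumed CXL-attached NUMA node */
#define BUF_SIZE (64UL * 1024 * 1024)   /* 64 MiB example buffer */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    /* Allocate physical pages on the CXL-attached node. */
    void *buf = numa_alloc_onnode(BUF_SIZE, CXL_NODE);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }

    /* Touch the memory so pages are actually faulted in on that node. */
    memset(buf, 0, BUF_SIZE);
    printf("Allocated %lu bytes on NUMA node %d\n", BUF_SIZE, CXL_NODE);

    numa_free(buf, BUF_SIZE);
    return 0;
}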

The adoption of CXL-enabled storage solutions is accelerating, with hyperscalers, enterprise IT providers, and cloud service providers all recognizing the benefits of memory disaggregation. CMN-S3, with its ability to efficiently route, share, and manage memory resources, is a key enabler of this paradigm shift.

Looking ahead, we can expect CMN-S3 to drive innovation in areas such as:

  • Composable Infrastructure: Disaggregating compute and memory resources to build highly efficient, scalable architectures
  • AI and HPC Storage Acceleration: Optimizing data access speeds for workloads requiring vast amounts of memory
  • Edge Computing: Enhancing storage solutions for latency-sensitive edge applications where real-time data processing is critical

By integrating CXL Type 3 capabilities, CMN-S3 is helping redefine the storage landscape, ensuring that future storage architectures are more flexible, scalable, and cost-effective. As the demand for high-performance, memory-centric computing grows, solutions like CMN-S3 will be at the core of this technological evolution.

Neoverse CMN S3 overview

Neoverse CMN S3 supports CXL.mem (Type 3) and CXL.cache (Type 1) functionality and is compliant with CXL specification revisions 2.0 and 3.0. The CMN S3 has 32 CCG (CXL Gateway) devices supporting Coherent Mesh Link (CML_SMP) or CXL 3.0, each with a 512-bit CXS Issue B interface.
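
For a rough sense of scale, a 512-bit CXS interface carries 64 bytes per transfer. The minimal sketch below works out the theoretical raw bandwidth per direction; the 2 GHz mesh clock is an assumed example value, not a CMN S3 specification.

/* Back-of-the-envelope sketch: theoretical per-direction bandwidth of a
 * 512-bit CXS interface. The 2.0 GHz clock is an assumed example value;
 * the actual interconnect frequency is implementation-defined.
 */
#include <stdio.h>

int main(void)
{
    const double cxs_width_bits = 512.0;                 /* CXS Issue B data width */
    const double clock_hz       = 2.0e9;                 /* assumed mesh clock */
    const double bytes_per_beat = cxs_width_bits / 8.0;  /* 64 bytes per transfer */

    double bytes_per_sec = bytes_per_beat * clock_hz;    /* 64 B x 2 GHz = 128 GB/s */
    printf("Theoretical raw bandwidth: %.0f GB/s per direction\n",
           bytes_per_sec / 1e9);
    return 0;
}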

Figure: The CMN S3 with branches for high-performance, secure, and scalable interconnect.

The Neoverse CMN S3 also supports homogeneous and heterogeneous multi-chip system topologies. CXL Type 1 devices can be attached to CMN S3, where each CXL Type 1 device is treated as a caching agent, and because CMN S3 is CXL 3.0 compliant it also supports multiple caching agents behind a CXL cache port.

Homogeneous multi-chip system

Figure: A homogeneous multi-chip system.

This example system has the following characteristics:

•  P0: 128 RN-Fs, 16 RN-Is, 4 PCIe RN-Is, 4 coherency domains, 12 CXL Type 1 devices, 2 CXL Type 3 devices attached
•  P1: 128 RN-Fs, 16 RN-Is, 4 PCIe RN-Is, 4 coherency domains, 12 CXL Type 1 devices, 1 CXL Type 3 device attached
•  P2: 128 RN-Fs, 16 RN-Is, 4 PCIe RN-Is, 4 coherency domains, 12 CXL Type 1 devices, 1 CXL Type 3 device attached
•  P3: 128 RN-Fs, 16 RN-Is, 4 PCIe RN-Is, 4 coherency domains, 12 CXL Type 1 devices, 2 CXL Type 3 devices attached
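
Across the four chips, this example adds up to 4 × 128 = 512 RN-Fs, 4 × 16 = 64 RN-Is, 4 × 4 = 16 PCIe RN-Is, 4 × 12 = 48 CXL Type 1 devices, and 2 + 1 + 1 + 2 = 6 CXL Type 3 devices.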

The Neoverse CMN S3 also supports security features such as host-side data encryption for data stored on a CXL Type 3 device. Host-side MPE is expected to be enabled when a CXL switch and/or CXL devices are not part of the host’s trusted compute base.

The CMN S3 enables CXL Dynamic Capacity Devices, which address the growing need for flexible and scalable memory solutions in modern data centers. A key use case involves memory pooling, where multiple servers dynamically access a shared memory resource to handle fluctuating workloads. With CMN-S3’s advanced coherence management and low-latency interconnect, these devices enable real-time scaling of memory resources, ensuring optimal performance without overprovisioning. This makes CMN-S3 integral to supporting demanding applications like AI training, real-time processing, in-memory databases, and multi-tenant cloud environments while reducing costs and enhancing resource utilization.
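
When a CXL memory region, including capacity added dynamically, is exposed to applications as a device-DAX node rather than as ordinary system RAM, software can map it directly. The sketch below is illustrative only: the device path /dev/dax0.0 and the mapping size are assumptions that depend on how the platform enumerates the region (see daxctl list on the target system).

/* Minimal sketch: map a CXL-backed device-DAX region into the process
 * address space. The device path and size are assumptions; they depend
 * on how the platform enumerates the CXL region.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define DAX_PATH "/dev/dax0.0"             /* assumed CXL devdax node */
#define MAP_SIZE (256UL * 1024 * 1024)     /* 256 MiB example window  */

int main(void)
{
    int fd = open(DAX_PATH, O_RDWR);
    if (fd < 0) {
        perror("open " DAX_PATH);
        return 1;
    }

    void *base = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* Writes land in the CXL-attached memory region. */
    strcpy(base, "hello from CXL-attached memory");
    printf("Mapped %lu bytes of %s at %p\n", MAP_SIZE, DAX_PATH, base);

    munmap(base, MAP_SIZE);
    close(fd);
    return 0;
}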

Kioxia: A pioneer in CXL and Neoverse CMN S3 adoption

Kioxia, a global leader in memory solutions, has embraced Neoverse CMN S3 for its next-generation CXL devices. The company is developing advanced storage architectures, leveraging Neoverse CMN S3 to deliver shared memory solutions with high bandwidth and low latency.

Kioxia aims to commercialize CXL-enabled memory technologies that transform pooled storage and data center scalability. This underscores the industry’s shift toward coherent, high-performance CXL ecosystems powered by interconnect solutions such as Neoverse CMN S3.
