Hyperscale Computing and EDA in the Cloud on Arm Servers

Frank Schirrmeister
October 8, 2020
3 minute read time.

Just five years from now, by 2025, sensors will create exabytes of data per day (according to IBS and Seagate), transmitted through next-generation networks with the lowest latencies possible. Zettabytes of data will be stored in the global datasphere, and consumers will increasingly expect instantaneous responses to all their needs. To achieve that, networks, storage, and compute must scale to speeds and capacities that are hard to comprehend, hence the term "hyperscale computing". It will be one of the big topics at Arm DevSummit, and it will impact all our lives as consumers.

And hyperscale computing covers much more than the data center. At Cadence, we think of it as the cycle of sensing data, transmitting it through networks, and processing and storing it to eventually make sense of that data and create actionable results. Looking at the technologies that enable hyperscale computing, the Arm-Cadence partnership touches nearly every aspect of this cycle. It can be seen in sensors and their analog/mixed-signal challenges, in next-generation networks as 5G rolls out, and in the data centers where high-performance computing happens. The industry is witnessing a fundamental transformation of compute, storage, memory, and networking, as outlined in The Four Pillars Of Hyperscale Computing. Where data is processed outside the data center—at the inner edge, the outer edge, or the sensing nodes—depends on the application and on the latency at which users expect results. The following graph illustrates the journey of data from sensors, through networks, to the data center, along with the latencies users can expect depending on where the data is processed.


[Figure: the journey of data from sensors through networks to the data center, with the latencies users can expect depending on where data is processed.]

At this point, we consumers all expect that data from our fitness trackers, consumer behaviors, and driving behaviors is sent into the cloud for processing. Cloud usage has a profound impact on EDA as well.

At CadenceLIVE Americas, Nafea Bshara of Amazon AWS, co-founder of Annapurna Labs, talked in some detail about their own cloud usage as well as their customers'. Some of the data is summarized in a blog post called "Climbing Annapurna to the Clouds", including actual customer usage figures. For the next chip at a more advanced technology node, customers spent less overall by optimizing their use of newer, faster servers.

Cost per throughput is the metric to watch, and availability-versus-demand considerations add flexibility. Bshara described how, during certain phases of a project, Amazon AWS spot instances—spare capacity—were leveraged at up to 90 percent lower cost. In addition, cloud users have the flexibility to choose instances that are nominally slower but deliver better cost per throughput. It is all demand based, almost like theatre tickets, as I outlined quite some time back in "What "Hamilton – An American Musical" Tickets and Emulation Have in Common".
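The cost-per-throughput argument can be made concrete with a small sketch. The instance names, hourly prices, and throughput figures below are invented for illustration and are not actual AWS pricing; the point is only that a nominally slower or spot instance can win once you divide price by work completed:

```python
# Illustrative cost-per-throughput comparison for EDA workloads.
# All instance names, prices, and job rates are hypothetical, not AWS data.

def cost_per_job(hourly_price, jobs_per_hour):
    """Dollars spent per simulation job completed."""
    return hourly_price / jobs_per_hour

# (hourly price in $, simulation jobs completed per hour)
instances = {
    "fast-on-demand": (4.00, 100),  # fastest wall-clock time
    "slow-on-demand": (1.60, 50),   # nominally slower, cheaper per job
    "fast-spot":      (0.40, 100),  # spare capacity at ~90% discount
}

# Rank by cost per throughput: the spot instance comes out cheapest,
# and the slower on-demand instance still beats the fast one.
for name, (price, jobs) in sorted(
        instances.items(), key=lambda kv: cost_per_job(*kv[1])):
    print(f"{name:15s} ${cost_per_job(price, jobs):.4f} per job")
```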

That said, Arm, Cadence, and AWS have partnered to make key EDA capabilities available on AWS Graviton2 in the cloud as well. The focus is on the tools that consume the most cycles—simulation and characterization—specifically Xcelium Logic Simulation, Liberate Characterization, and Spectre® Simulation. In a session called "Scalable Cloud-Based Simulation and Characterization" at Arm DevSummit, Arm's Bhumik Patel, my colleague Brandon Bautz, and I will show some of the results. See the resources below.

Here's to our hyperscale computing future, and to EDA tools being available on the Arm architecture as well.

Resources:

  • AWS Goes All in on Arm-Based Graviton2 Processors with EC2 6th Gen Instances
  • Designing Arm Processors Using Arm Processors in the AWS Cloud
  • Arm on Arm: Cadence Characterization in the AWS Cloud
  • Xcelium Is 50% Faster on AWS's New Arm Server Chip