Arm on Arm: Cadence Characterization in the AWS Cloud

Darren Cepulis
October 20, 2019
5 minute read time.

The strong growth in the HPC, ML/AI, and big data analytics sectors has largely been driven by an explosion of new use cases across numerous businesses and areas of research.  To stay competitive and sustain progress, IT departments must look at new technologies and new ways of managing the growing demands of their users.  Here we look at the suitability of AWS and Arm-based platforms for certain EDA applications, focusing specifically on the Cadence Liberate Trio suite of digital design tools and the Amazon EC2 A1 instances, which leverage Arm Neoverse cores.  The Cadence Liberate Trio Characterization Suite is important to Arm because it delivers digital design characterization, validation, and modeling in support of Arm’s core compute IP business.

First, a quick note on the motivations, challenges, and suitability questions to consider when looking at a cloud approach for compute-intensive workloads such as those in the EDA and HPC segments.

AWS Cloud Benefits and Challenges for HPC/EDA applications

  • Immediate availability of extensive compute resources (no waiting in line for other jobs to finish)
  • Potential to use Spot-market pricing structures, saving up to 90% versus On-Demand pricing, but with a small chance of your job being interrupted (see the sketch after this list)
  • Flexibility to scale up or down quickly as new applications and/or new data arrives.
  • Data sizes and movement should be carefully planned as data transfer times and costs can add up.
  • Not all applications may scale well or perform well in the cloud.
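
As a minimal sketch of what Spot-market provisioning can look like with the AWS SDK for Python (boto3), here is an illustrative request for an Arm-based A1 instance.  The AMI ID, region, and maximum price are placeholders, not values used in this work.

    import boto3  # AWS SDK for Python

    # Illustrative sketch only: the AMI ID, region, and max price are placeholders.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-EXAMPLE",            # placeholder 64-bit Arm (aarch64) AMI
        InstanceType="a1.4xlarge",        # Arm-based A1 instance
        MinCount=1,
        MaxCount=1,
        InstanceMarketOptions={
            "MarketType": "spot",
            "SpotOptions": {"MaxPrice": "0.20", "SpotInstanceType": "one-time"},
        },
    )
    print(response["Instances"][0]["InstanceId"])

If the Spot price rises above the maximum bid, the instance can be reclaimed, which is why Spot capacity suits restartable, loosely coupled jobs such as characterization runs.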

On the Arm side, we greatly appreciate the turn-around time (TAT) reduction we achieve using the cloud.  Projects that traditionally planned to run characterization for four months using 4k slots now finish in less than a month with 20k slots.  This has a huge positive impact on our business.
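
As a rough sanity check on those numbers: 20k slots is five times the 4k-slot baseline, so under near-linear scaling a four-month campaign would be expected to finish in roughly 0.8 months, consistent with the sub-month turn-around we see.

    # Rough sanity check of the turn-around-time claim above,
    # assuming near-linear scaling of the characterization jobs across slots.
    baseline_slots, baseline_months = 4_000, 4
    cloud_slots = 20_000
    ideal_months = baseline_months * baseline_slots / cloud_slots
    print(f"Ideal TAT with {cloud_slots} slots: {ideal_months:.1f} months")  # ~0.8 months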

Cloud-suitable HPC applications

Embarrassingly parallel or coarse-grained parallel HPC applications, where compute time dominates over communication or I/O, are usually well suited to cloud deployments, as are applications whose compute threads are loosely coupled and amenable to scaling out.

Some good examples: Monte Carlo simulations for financial risk analysis and portfolio management, image processing, seismic field analysis, bioinformatics, EDA design analysis, and big data analytics.
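
As a toy illustration of that embarrassingly parallel pattern (not Cadence or Arm code), a Monte Carlo estimate can be split into fully independent batches and fanned out across worker processes; each batch returns only a count, so there is essentially no communication or I/O to limit scaling.

    import random
    from multiprocessing import Pool

    def simulate_batch(args):
        """One independent batch of Monte Carlo trials; workers never communicate."""
        seed, trials = args
        rng = random.Random(seed)
        return sum(1 for _ in range(trials)
                   if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

    if __name__ == "__main__":
        batches = [(seed, 100_000) for seed in range(16)]  # 16 independent work units
        with Pool() as pool:                               # fan out across local cores,
            hits = pool.map(simulate_batch, batches)       # or across nodes via a scheduler
        trials = sum(t for _, t in batches)
        print("pi estimate:", 4 * sum(hits) / trials)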

Porting to the Cloud

Moving applications from one computing architecture to another, also known as porting, can often be a simple endeavor, sometimes as easy as recompiling the code with a new compiler and a new set of flags.  Arm, with its mature set of commercial and open-source HPC compiler toolchains, sees this as the norm when working with end users.  Moving an application to a cloud platform is usually straightforward as well, but it does require additional planning and optimization of the application, along with in-house software layers and tools for interfacing with and scaling up the applications on cloud services.
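
As a minimal sketch of that kind of recompile (the source file name and flag choices are hypothetical, not the actual build settings for any tool discussed here), a build step can simply pick architecture-appropriate compiler flags:

    import platform
    import subprocess

    # Hypothetical example: choose tuning flags based on the architecture we are building on.
    if platform.machine() == "aarch64":
        flags = ["-O3", "-mcpu=native"]    # tune for the local Arm core
    else:
        flags = ["-O3", "-march=native"]   # tune for the local x86 core
    subprocess.run(["gcc", *flags, "-o", "solver", "solver.c"], check=True)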

Having an application be cloud-enabled is a must these days for enterprise IT professionals.  Cadence has taken this to heart with its Liberate Trio suite by validating the application's scalability across many CPUs, supporting cloud-based licensing checks, and providing workload management tools that run well on the leading cloud providers for the most efficient use of cloud resources.  Cadence has also optimized the workload with a Unified Flow, which allows more characterization compute to occur within a single run and can speed up the overall job by up to 30%.

Arm in the Cloud

In 2018, Amazon deployed its Arm-based Nitro System across its entire AWS cloud platform in a strong push for cloud performance.  Nitro seamlessly enables, among other things, high-speed NVMe storage, virtualized instances running at near bare-metal speeds, and significantly accelerated secure, elastic storage.  Then, toward the end of 2018, AWS announced its Arm-based A1 “Graviton” instances, which are now available for general applications.  Amazon touts up to 45% cost savings when running various prevalent workloads on A1 instances versus its traditional x86 instances.  The Nitro performance features and the Graviton cost savings have Arm IT interested in running applications on Arm platforms in the AWS cloud, with EDA codes being one of the initial application segments targeted.  Besides being key to Arm’s core IP business, many EDA applications map well to the new AWS resources.  Looking at the requirements for the Cadence Liberate Trio characterization suite, we can see that A1 instances are well aligned in terms of system CPU and memory requirements.  Other EDA or HPC applications may need larger resources, but we anticipate that future AWS Arm-based instances may support those.  Please stay tuned in that regard.
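
As a hedged illustration of that sizing check (not the exact process used for this work), the published vCPU and memory figures for an A1 instance type can be queried programmatically with boto3 and compared against a job's per-slot requirements:

    import boto3

    # Illustrative only: look up published specs for an A1 instance type.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    a1 = ec2.describe_instance_types(InstanceTypes=["a1.4xlarge"])["InstanceTypes"][0]
    vcpus = a1["VCpuInfo"]["DefaultVCpus"]
    mem_gib = a1["MemoryInfo"]["SizeInMiB"] / 1024
    print(f"a1.4xlarge: {vcpus} vCPUs, {mem_gib:.0f} GiB memory")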

Cost Analysis

Finally, let’s take a quick look at some of the results measured so far with Cadence on Arm in the cloud.  While these first Arm-based A1 instances deliver less performance than larger, more power-hungry x86 cores, they are also significantly less expensive to deploy.  The chart below shows both execution time and instance costs.  Although one would need more than two A1 instances to replace a single x86 instance, the Cadence characterization workload scales well and aligns well to an A1 instance’s resources, giving users a more cost-effective option.  The bottom line for this specific EDA application is that the current Arm-based A1 instances provide a 35% cost savings over other current offerings, and we expect things to only get better from there in Arm’s favor.
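
The underlying comparison is simple: total cost is instances × hours × hourly price, so a job that spreads across more A1 instances can still come out cheaper when each instance costs sufficiently less.  A small worked example with placeholder numbers (not the measured figures behind the chart or the 35% result):

    # Illustrative arithmetic only: instance counts, run times, and hourly prices
    # are placeholders, not the measured values behind the 35% figure quoted above.
    def job_cost(instances, hours, price_per_hour):
        return instances * hours * price_per_hour

    x86_cost = job_cost(instances=1, hours=10, price_per_hour=0.80)
    a1_cost = job_cost(instances=3, hours=12, price_per_hour=0.14)  # more, but cheaper, instances
    print(f"x86: ${x86_cost:.2f}  A1: ${a1_cost:.2f}  savings: {1 - a1_cost / x86_cost:.0%}")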

Conclusion

In this blog, we focused on the cloud-optimized Cadence Liberate Trio Characterization Suite, which is important to Arm because it delivers characterization, validation, and modeling in support of Arm’s core IP business.  While Arm itself is interested in capturing the benefits of running its design processes on Arm-based server platforms in our own data centers, we are also interested in leveraging the cloud for the reasons mentioned above.  With Cadence and the Amazon EC2 A1 instances, which are powered by the Arm-based AWS Graviton processors, we can now do both.  Arm IT will itself use those savings as the basis for future cloud-based compute.

 

All slide images courtesy of Ajay Chopra, Arm, and Seena Shankar, Cadence, as presented at the recent TSMC Open Innovation Platform Ecosystem Forum in Santa Clara, CA.
