The strong growth in the HPC, ML/AI, and big data analytics sectors has been driven mainly by an explosion of new use cases across numerous businesses and areas of research. To stay competitive and sustain progress, IT departments must look at new technologies and new ways of managing the growing demands of their users. Here we look at the suitability of AWS and Arm-based platforms for certain EDA applications, focusing specifically on the Cadence Liberate Trio Characterization Suite of digital design tools and the Amazon EC2 A1 instances, which leverage Arm Neoverse cores. The Cadence Liberate Trio Characterization Suite is important to Arm in that it delivers digital design characterization, validation, and modeling in support of Arm’s core compute IP business.
First, a quick note on the motivations, challenges, and suitability considerations that come with a cloud approach to compute-intensive workloads such as those in the EDA and HPC segments.
On the Arm side, we greatly appreciate the turnaround time (TAT) reduction we achieve using the cloud. Projects that traditionally planned to run characterization for four months using 4,000 slots now finish in less than a month with 20,000 slots. This has a huge positive impact on our business.
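As a rough sanity check on that claim, the ideal speedup can be worked out from the slot counts alone. The only figures below taken from the text are the four-month baseline and the 4,000/20,000 slot counts; perfect linear scaling is an assumption, not a measured result:

```python
# Ideal (perfectly linear) scaling estimate for the characterization run.
# Figures from the text: 4 months on 4,000 slots, scaled out to 20,000 slots.
baseline_months = 4.0
baseline_slots = 4_000
scaled_slots = 20_000

# Assuming the workload is embarrassingly parallel with negligible overhead,
# runtime shrinks in proportion to the slot count.
ideal_months = baseline_months * baseline_slots / scaled_slots
print(f"Ideal runtime at {scaled_slots} slots: {ideal_months:.1f} months")
# Prints: Ideal runtime at 20000 slots: 0.8 months
```

The observed result of "less than a month" sits close to this 0.8-month ideal, which is consistent with the workload scaling well across slots.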
Embarrassingly parallel or coarse-grained parallel HPC applications, where compute time dominates over communication or I/O, are usually well suited to cloud deployments, as are applications whose compute threads are loosely coupled and amenable to scaling out.
Some good examples: Monte Carlo simulations for financial risk analysis and portfolio management, image processing, seismic field analysis, bioinformatics, EDA design analysis, and big data analytics.
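To make the "embarrassingly parallel" property concrete, here is a minimal sketch. The Monte Carlo pi estimator is an illustrative stand-in (not one of the production workloads above): each task runs independently, and the only coordination is a trivial reduction of per-task counts at the end.

```python
import random
from multiprocessing import Pool

def monte_carlo_hits(n_samples: int) -> int:
    """Count random points landing inside the unit quarter-circle.
    Each task is fully independent: no communication, no shared state."""
    rng = random.Random()
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    tasks, samples = 8, 100_000
    with Pool() as pool:
        # Scatter independent tasks across workers; compute time dominates,
        # and the only "communication" is summing the counts at the end.
        hits = sum(pool.map(monte_carlo_hits, [samples] * tasks))
    pi_estimate = 4.0 * hits / (tasks * samples)
    print(f"pi estimate: {pi_estimate:.3f}")
```

Because the tasks share nothing, the same pattern scales out across cloud instances just as easily as across local cores, which is exactly why workloads of this shape map well to elastic cloud capacity.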
Moving applications from one computing architecture to another, also known as porting, can often be a simple endeavor; the process can be as easy as recompiling the code with a new compiler and a new set of flags. Arm, which has a mature set of commercial and open-source HPC compiler toolchains, sees this as the norm when working with end users. Moving an application to a cloud platform is usually a straightforward task as well, but it does require some additional planning and optimization of the application, as well as in-house software layers and tools for interfacing with and scaling up the apps on cloud services.
Having an application be cloud-enabled is a must these days for enterprise IT professionals. Cadence has taken this to heart with its Liberate Trio suite by validating the scalability of the application across many CPUs, supporting cloud-based licensing checks, and providing workload management tools that run well on the leading cloud providers for the most efficient use of cloud resources. Cadence has also optimized the workload with a Unified Flow, which allows more characterization compute to occur within a single run and can speed up the overall job by up to 30%.
In 2018, Amazon deployed its Arm-based Nitro System across its entire AWS cloud platform in a strong push for cloud performance. Nitro seamlessly enables, among other things, high-speed NVMe storage, virtualized instances running at near bare-metal speeds, and significantly accelerated secure, elastic storage. Then, toward the end of 2018, AWS announced its Arm-based A1 “Graviton” instances, which are now available for general applications. Amazon touts up to 45% cost savings when running various prevalent workloads on A1 instances versus its traditional x86 instances. The Nitro performance features and the Graviton cost savings have Arm IT interested in running applications on Arm platforms in the AWS cloud, with EDA codes being one of the initial application segments targeted. Besides supporting a key component of Arm’s core IP business, many EDA apps map well to the new AWS resources. Looking at the requirements for the Cadence Liberate Trio Characterization Suite, we can see that A1 instances are well aligned in terms of CPU and memory requirements. Other EDA or HPC applications may need larger resources, but we anticipate that future AWS Arm-based instances may support those. Please stay tuned in that regard.
Finally, let’s take a quick look at some of the results measured so far with Cadence on Arm in the cloud. While these first Arm-based A1 instances deliver lower performance than larger, more power-hungry x86 cores, they are also significantly less expensive to deploy. The chart below shows both execution time and instance costs. Although more than two A1 instances are needed to match a single x86 instance, the Cadence characterization workload scales quite well and aligns well with an A1 instance’s resources, giving users a better choice. The bottom line for this specific EDA application is that the current A1 Arm-based instances provide a 35% cost savings over other current offerings, and we expect things to only get better from there in favor of Arm.
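The shape of that trade-off can be sketched with a simple cost model. All prices and the 2.5x instance-hour ratio below are hypothetical placeholders (chosen so the arithmetic lands on the reported ~35%); only the savings figure itself comes from the measurements above:

```python
# Hedged sketch of the instance-cost comparison.
# All numbers below are illustrative placeholders, not actual AWS prices
# or benchmark results; substitute current on-demand prices and measured
# runtimes for a real comparison.
x86_price_per_hour = 1.00   # hypothetical x86 instance price ($/hr)
a1_price_per_hour = 0.26    # hypothetical A1 instance price ($/hr)
x86_runtime_hours = 10.0    # hypothetical job runtime on one x86 instance

# Suppose the job needs ~2.5x the instance-hours on A1 to do the same work
# (consistent with "more than two A1 instances per x86 instance" above).
a1_instance_hours = 2.5 * x86_runtime_hours

x86_cost = x86_price_per_hour * x86_runtime_hours
a1_cost = a1_price_per_hour * a1_instance_hours
savings = 1.0 - a1_cost / x86_cost
print(f"Cost savings on A1: {savings:.0%}")
# Prints: Cost savings on A1: 35%
```

The point of the model is that a per-instance performance deficit does not decide the outcome: as long as the workload scales well, what matters is total instance-hours times price, and that is where the A1 instances come out ahead.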
In this blog, we focused on the cloud-optimized Cadence Liberate Trio Characterization Suite, which is important to Arm in that it delivers characterization, validation, and modeling in support of Arm’s core IP business. While Arm itself is interested in capturing the benefits of running its design processes on Arm-based server platforms in our own data centers, we are also interested in leveraging the cloud for the reasons mentioned above. With Cadence and the Amazon EC2 A1 instances powered by the Arm-based AWS Graviton processors, we can now do both. Arm IT will itself leverage those savings to provide the basis for future cloud-based compute.
All slide images courtesy of Ajay Chopra, Arm, and Seena Shankar, Cadence, as presented at the recent TSMC Open Innovation Platform Ecosystem Forum in Santa Clara, CA.