Co-authors: Bhumik Patel – Director, Software Ecosystem; Tim Thornton – Director, Arm Engineering
Arm and AWS have partnered to drive rapid silicon design by bringing the benefits of cloud computing on the Arm architecture to Electronic Design Automation (EDA) software and tools, a complex and costly part of the development process.
Arm and AWS have partnered with the EDA ecosystem to bring the required software and tools to the newly launched Arm Neoverse-based Amazon Elastic Compute Cloud (EC2) M6g, C6g, and R6g instances. These instances are powered by AWS Graviton2 processors, which provide up to 40% better price-performance than current x86-based instances. This joint video provides insights into use cases that benefit from executing EDA software on these new Arm-based instances in AWS.
In partnership with Cadence Design Systems, Arm’s Engineering group has showcased the cost-per-throughput benefits of running Cadence Xcelium on AWS Graviton2. Cadence Xcelium is a design verification and debugging tool used extensively in silicon design. We ran real production validation jobs used to test the Arm Cortex-A53 design, comparing runtime and per-vCPU performance on Arm-based Amazon EC2 M6g instances and x86-based M5 instances. The following figure shows that the job time on the largest M6g instance (16xl) is just 2% longer than on the largest M5 instance available (24xl), even though M6g.16xl instances have 64 vCPUs and M5.24xl instances have 96 vCPUs. As shown in Figure 2, with 33% fewer vCPUs, M6g requires 32% less runtime per vCPU to execute the same job.
Figure 1: Comparison of Cadence Xcelium Simulation Runtime between M6g and M5 Instances
Figure 2: Comparison of Cadence Xcelium Simulation Runtime per vCPU between M6g and M5 Instances
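The per-vCPU comparison in Figure 2 can be reproduced from the numbers quoted above. The short sketch below is illustrative only: it assumes the per-vCPU metric is simply runtime weighted by vCPU count, and normalises the M5 baseline runtime to 1.0 rather than using the measured values.

```python
# Illustrative only: reproduces the Figure 2 normalisation from the quoted
# numbers, assuming "runtime per vCPU" means runtime weighted by vCPU count.
m5_runtime = 1.00        # M5.24xl runtime, normalised to 1.0
m6g_runtime = 1.02       # M6g.16xl runtime, 2% longer on the same job
m5_vcpus, m6g_vcpus = 96, 64

# vCPU-time consumed by each run (runtime x vCPU count)
m5_vcpu_time = m5_runtime * m5_vcpus
m6g_vcpu_time = m6g_runtime * m6g_vcpus

saving = 1 - m6g_vcpu_time / m5_vcpu_time
print(f"M6g uses {saving:.0%} less vCPU-time for the same job")  # prints 32%
```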
In partnership with Mentor, Arm’s Engineering group used the Mentor Graphics Questa Advanced Simulator (QuestaSim) to validate the Arm Cortex-M55 core RTL on Amazon M6g instances. The following figure shows the AWS architecture Arm’s engineering team used for its production deployment of Mentor QuestaSim. As shown, the AWS Batch service is used to scale a large number of Amazon M6g instances, running QuestaSim in container images with high throughput.
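To illustrate this pattern, the minimal boto3 sketch below registers an arm64 container job definition and submits simulation jobs to an AWS Batch queue that would be backed by M6g instances. The region, resource names, container image, test names, and sizing values are placeholders rather than Arm’s production configuration.

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")  # placeholder region

# Placeholder job definition: a container image holding the simulator and flow
# scripts, built for arm64 so it runs on Graviton2 (M6g) hosts.
job_def = batch.register_job_definition(
    jobDefinitionName="questa-regression",               # hypothetical name
    type="container",
    containerProperties={
        "image": "<account>.dkr.ecr.us-east-1.amazonaws.com/eda-sim:arm64",
        "resourceRequirements": [
            {"type": "VCPU", "value": "8"},
            {"type": "MEMORY", "value": "32768"},
        ],
        "command": ["run_sim.sh", "Ref::testname"],       # hypothetical entry point
    },
)

# Submit one job per test in the regression list; AWS Batch scales the
# M6g-backed compute environment behind the queue to match the load.
for test in ["smoke_01", "smoke_02"]:                     # placeholder test names
    batch.submit_job(
        jobName=f"cortex-m55-{test}",
        jobQueue="eda-m6g-queue",                         # queue backed by M6g instances
        jobDefinition=job_def["jobDefinitionArn"],
        parameters={"testname": test},
    )
```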
As shown in Figures 3 and 4 below, M6g instances complete this test case in 20% less time and at 36% lower cost on a per-vCPU basis than M5 instances. For example, our project’s nightly simulations across multiple AWS instances cost $226.84 on M5 instances versus $145.34 on M6g instances, calculated using AWS list prices.
Figure 3: Comparison of Mentor QuestaSim Runtime between M6g and M5 Instances
Figure 4: Comparison of Mentor QuestaSim Runtime Cost between M6g and M5 Instances
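As a quick consistency check, the quoted nightly totals give roughly the same saving as the per-vCPU figure:

```python
# Quick check of the nightly regression costs quoted above (AWS list prices).
m5_cost, m6g_cost = 226.84, 145.34
saving = 1 - m6g_cost / m5_cost
print(f"M6g nightly regression is {saving:.0%} cheaper than M5")  # prints 36%
```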
Our partners Cadence and Mentor have made their simulators available on Arm-based servers, enabling their customers to benefit from efficiencies similar to those Arm’s Engineering group has showcased for its internal use.
In addition to the availability of the EDA software tools themselves, flow design is also important while developing silicon. Hardware engineers rely on a “flow” – a software framework that brings together the prerequisites for running the tool, an execution environment that can take the hundreds or thousands of tests that a project might have defined and dispatch them all to appropriate compute resources, and a mechanism to collate the results from all these tests to present back to the hardware engineer. These flows can often be different from one project team to another and are commonly coupled tightly to the available infrastructure. This can expose performance limitations in the engineering compute environment and make it difficult to move a flow that runs in an on-premises datacentre to another location. Traditional flows are the antithesis of cloud native. With different flows in different teams, it also makes it harder for engineers to switch between projects, and results in a lot of duplicated development work.
Arm recognised this as an issue some years ago and embarked on a project to define a common flow that would support the way that multiple teams use EDA tools. Initially, the goal was to reduce the engineering time spent on flow maintenance and training, but it was also seen as a great enabler for improving the efficiency of simulation runs. A mature common flow lets engineers take meaningful advantage of the cloud, because the self-contained environment is abstracted from infrastructure dependencies.
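As a rough illustration of what such a common flow abstracts, the sketch below dispatches a list of tests through a pluggable executor and collates the results; the executor could wrap a local simulator invocation, an on-premises scheduler, or an AWS Batch submission. The function and test names are hypothetical and not part of Arm’s flow.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

def run_flow(tests: list[str], dispatch: Callable[[str], bool]) -> dict[str, bool]:
    """Dispatch every test through a pluggable executor and collate the results.

    `dispatch` hides the infrastructure: it could wrap a local simulator call,
    an on-premises batch scheduler, or a cloud job submission, so the flow
    itself stays independent of where the compute runs.
    """
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = dict(zip(tests, pool.map(dispatch, tests)))

    failures = [name for name, passed in results.items() if not passed]
    print(f"{len(tests) - len(failures)}/{len(tests)} tests passed")
    for name in failures:
        print(f"  FAIL: {name}")
    return results

# Placeholder executor: pretend every test passes.
if __name__ == "__main__":
    run_flow(["uart_smoke", "dma_burst"], dispatch=lambda test: True)
```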
By leveraging Graviton2 in the AWS Cloud, Arm has been able to increase the flexibility of its EDA simulation runs while reducing cost. Arm’s journey of bringing rapid innovation to silicon design continues, and we are excited to work with our ecosystem partners and customers to enable this transformation.
For customers and partners interested in learning more about these use cases, please join the joint Arm and AWS session ‘Chip Design on Arm in the Cloud’ at Arm DevSummit, which will be held virtually on October 6th.
For more information and to engage with Arm directly for your deployment, please email us.
[CTAToken URL = "https://community.arm.com/developer/tools-software/hpc/b/hpc-blog/posts/designing-arm-processors-using-arm-processors-in-the-aws-cloud?_ga=2.168061153.1784439336.1597672936-225901144.1543880409" target="_blank" text="Learn More About EDA on Arm" class ="green"]