Just five years from now, by 2025, sensors will create exabytes of data per day (according to IBS and Seagate) that will be transmitted through next-generation networks with the lowest latencies possible. Zettabytes of data will be stored in the global datasphere, and consumer expectations for instantaneous responses to all their needs will only grow. To meet them, networks, storage, and compute must “hyper-scale” to speeds and capacities that are hard to comprehend, hence the term “hyperscale computing”. It will be one of the big topics at the Arm DevSummit, impacting all our lives as consumers.
And hyperscale computing covers much more than the data center. At Cadence, we think about it as the cycle of sensing data, transmitting it through networks, and processing and storing it to eventually make sense of that data and create actionable results. Looking at the technologies enabling hyperscale computing, the Arm-Cadence partnership touches nearly every aspect of this cycle. It can be seen from sensors and their analog/mixed-signal challenges, through next-generation networks as 5G rolls out, to the data centers in which high-performance computing happens. The industry is witnessing a fundamental transformation of compute, storage, memory, and networking, as outlined in The Four Pillars Of Hyperscale Computing. Where data is processed outside the data center (at the inner edge, the outer edge, or the sensing nodes) depends heavily on the application and on the latency at which users expect results. The following graph illustrates the journey of data from sensors, through networks, to the data center, together with some of the latencies users can expect depending on where data is processed.
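To make that placement tradeoff concrete, here is a minimal Python sketch; the tier names and latency figures are assumptions for illustration only, not values taken from the graph or from any Arm or Cadence measurement.

```python
# Illustrative sketch: pick where to process data based on a latency budget.
# Tier names and round-trip latencies below are assumed values for illustration.
TIER_LATENCY_MS = {
    "sensing node": 1,    # processing directly on the device
    "outer edge": 10,     # e.g., a gateway or base station
    "inner edge": 40,     # a regional edge data center
    "data center": 150,   # the centralized cloud
}

def pick_processing_tier(latency_budget_ms):
    """Return the most centralized tier that still meets the latency budget."""
    candidates = [tier for tier, lat in TIER_LATENCY_MS.items() if lat <= latency_budget_ms]
    # Prefer the tier with the largest tolerable latency: it offers the most compute and storage.
    return max(candidates, key=TIER_LATENCY_MS.get) if candidates else "sensing node"

print(pick_processing_tier(5))    # -> "sensing node"
print(pick_processing_tier(50))   # -> "inner edge"
print(pick_processing_tier(500))  # -> "data center"
```

The point is simply that the tighter the latency budget, the closer to the sensor the processing has to happen.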
At this point, we as consumers all expect that data from our fitness trackers, consumer behaviors, and driving behaviors is sent to the cloud for processing. Cloud usage has quite a profound impact on EDA as well.
At CadenceLIVE Americas, Nafea Bshara from Amazon AWS, co-founder of Annapurna Labs, talked in some detail about their own, as well as their customers’, cloud usage. Some of it is summarized in a blog post called “Climbing Annapurna to the Clouds”, including some of the actual customer usage data. For the next chip, at a more advanced technology node, customers spent less overall by optimizing their usage of newer, faster servers.
Cost per throughput is the metric to watch, and weighing availability against demand adds flexibility. Bshara described how, during certain phases of a project, Amazon AWS spot instances (spare capacity) were leveraged at up to 90 percent lower cost. In addition, users have the flexibility in the cloud to choose instances that are nominally slower but provide better cost per throughput, as the back-of-the-envelope comparison below illustrates. It is all demand based, almost like theatre tickets, as I outlined in “What “Hamilton – An American Musical” Tickets and Emulation Have in Common” quite some time ago.
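As a rough illustration of that metric, here is a small Python sketch comparing a faster on-demand instance with a slower spot instance; the prices and job rates are made-up numbers for illustration only, not AWS pricing or the figures Bshara presented.

```python
# Hypothetical cost-per-throughput comparison; prices and job rates are
# illustrative assumptions, not actual AWS pricing or customer data.
def cost_per_job(hourly_price_usd, jobs_per_hour):
    """Cost to complete one job: lower is better."""
    return hourly_price_usd / jobs_per_hour

# A nominally faster on-demand instance...
fast_on_demand = cost_per_job(hourly_price_usd=3.00, jobs_per_hour=40)  # $0.075 per job
# ...versus a slower spot instance at roughly 90 percent lower cost.
slow_spot = cost_per_job(hourly_price_usd=0.40, jobs_per_hour=25)       # $0.016 per job

print(f"on-demand: ${fast_on_demand:.3f}/job, spot: ${slow_spot:.3f}/job")
```

Even though the spot instance finishes fewer jobs per hour, its cost per job is lower, which is exactly the kind of tradeoff flexible cloud capacity makes possible.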
To that end, Arm, Cadence, and AWS have partnered to make key EDA capabilities available on AWS Graviton2 in the cloud as well. The focus is on the tools that consume the most cycles: simulation and characterization, specifically Xcelium Logic Simulation, Liberate Characterization, and Spectre® Simulation. In a session called “Scalable Cloud-Based Simulation and Characterization”, Arm’s Bhumik Patel, my colleague Brandon Bautz, and I will show some of the results during the Arm DevSummit. More resources below.
Here’s to our hyperscale computing future, and to EDA tools being available on the Arm architecture as well.