As Arm emerges in the HPC market, we are often asked by partners and end-users if we are ready to deploy. Do our partners have HPC-capable platforms in production? And is a critical mass of foundational HPC software and prevalent HPC applications ready to roll on the Arm architecture?
A key function within Arm is the enablement and advancement of its ecosystem, encompassing architecture and IP, silicon partners, and software tools and application stacks. Arm’s move into the server and HPC markets requires both suitable processor platforms, in terms of performance and power efficiency, and key toolchains and ported applications. HPC makes significant demands on a hardware platform’s ability to sustain high memory and I/O bandwidth with low latency overheads. It also demands efficient processor cores with strong SIMD and floating-point pipelines. Recent public statements and demos from partners such as Cavium (ThunderX2) and Qualcomm (Centriq) suggest that highly performant Arm-based hardware for HPC is on the way.
So what about the HPC software stack? Has it been ported? Will it just work? Will applications scale well across all the Arm processor cores? We have much to answer in 2017 ahead of some major deployments, but the good news is that a lot has happened over the past few years to strengthen our HPC software ecosystem. To support this, I can point to a few key Arm software package releases:
These supported commercial packages exist today alongside a growing set of third-party foundational software from companies such as IBM, Rogue Wave, and Bright Computing, and a vast, mature set of open-source software tools, toolchains (GCC, LLVM), and applications. Now’s the time to put things to the test (or perhaps a short pop quiz is a better analogy).
To gauge our progress in this area, we sponsored a neutral third-party proof-of-concept project. We approached the folks at the University of Cambridge about kicking the tires on Arm with a prevalent HPC Computational Fluid Dynamics (CFD) application across a small cluster of servers. Cambridge is a long-time x86 site, and using Arm-based servers would be a novel experience for them. We advised them to use available production server hardware based on Cavium’s 48-core ThunderX chips and pointed them to our developer website. Then we stepped back and let them do their work. The results are presented in the whitepaper linked here:
[CTAToken URL = "https://developer.arm.com/-/media/developer/products/software-tools/hpc/Documentation/UCAMB_Arm_CFDvFinal.pdf" target="_blank" text="HPC Case Study - CFD Applications on Arm" class="green"]
A TL;DR summary of the whitepaper is as follows:
As we continue to engage with HPC end-users and ISVs, we highlight the low relative cost of porting to Arm, thanks to our standard architecture and mature toolchains, and we expect to see more concrete public examples of our ecosystem readiness in 2017. If you'd like to get involved, join the discussion at our community site, the Arm HPC Users Group, and contribute publicly to the progress of fully enabling a new architecture for HPC.