Launched in 2018, Arm Neoverse provides a dedicated, infrastructure-focused CPU IP roadmap spanning high-performance servers, power-efficient edge compute platforms, and gateways. With the availability of Arm Neoverse-based platforms, Independent Software Vendors (ISVs) are enabling customers to leverage the benefits of the Arm architecture. Over the years, ISVs have played a pivotal role in customers' cloud transformation journeys. By transforming the way software is consumed through SaaS subscription models and by providing innovative integrations with public cloud services, they allow customers to deploy applications consistently across multi-cloud environments. ISVs play an equally important role in customers' edge computing deployments, providing optimized software that solves the unique challenges of computing outside the cloud data center.
Infrastructure is evolving rapidly to be power efficient at the edge while delivering high performance and scalability in the data center and in the cloud. By making their software multi-platform, ISVs are accelerating how applications are developed for this next generation of infrastructure.
By enabling their software for Arm Neoverse platforms, ISVs let customers achieve industry-leading cost per throughput for cloud-native workloads. In addition, software distributions are being optimized to meet the changing requirements of next-generation applications and to support edge computing across a diverse range of platforms.
In this blog, we take a look at the ISV ecosystem for the cloud, data center, and edge.
Enablement of the ISV ecosystem on the Arm architecture in the cloud has accelerated since the availability of Arm Neoverse-powered AWS Graviton2 processors. These instances deliver leading performance across a broad spectrum of workloads and have extensive software ecosystem support. Customers see major performance benefits at significantly lower cost for a wide range of software workloads.
Customers can also realize these performance benefits in their cloud-native application development on AWS. This often involves container-based deployments and CI/CD software from ISVs such as GitLab, GitHub Actions, and Travis CI, which provide offerings for the AWS Graviton2-based instances.
Here, we showcase a sample Wiki.js application development workflow with GitLab CI/CD on Amazon Elastic Container Service (ECS), a fully managed container orchestration service that supports the AWS Graviton2-based instances. In addition, Amazon Elastic Container Registry (ECR), a fully managed Docker container registry, makes it easy for developers to store, manage, and deploy Docker container images for the Arm-based instances. In this architecture, application access is served by NGINX Plus and monitored by Datadog agents running on the AWS Graviton2-based instances.
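To make the flow concrete, below is a minimal sketch of what such a GitLab CI/CD pipeline could look like. It is illustrative only: the runner tag, ECR repository, and ECS cluster and service names are placeholders introduced for this example, not details of the deployment described above.

```yaml
# .gitlab-ci.yml -- illustrative sketch; the runner tag, ECR repository,
# and ECS cluster/service names below are placeholders.
stages:
  - build
  - deploy

variables:
  AWS_REGION: us-east-1
  DOCKER_TLS_CERTDIR: "/certs"
  ECR_IMAGE: 123456789012.dkr.ecr.us-east-1.amazonaws.com/wikijs:latest

build-arm64-image:
  stage: build
  tags: [arm64]               # GitLab runner hosted on a Graviton2 (arm64) instance
  image: docker:latest
  services: [docker:dind]
  script:
    # Log Docker in to Amazon ECR, then build and push a native arm64 image
    - apk add --no-cache aws-cli
    - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin ${ECR_IMAGE%%/*}
    - docker build -t $ECR_IMAGE .
    - docker push $ECR_IMAGE

deploy-to-ecs:
  stage: deploy
  image:
    name: amazon/aws-cli:latest
    entrypoint: [""]
  script:
    # Force a new deployment so the ECS service pulls the freshly pushed image
    - aws ecs update-service --cluster wikijs-cluster --service wikijs-service --force-new-deployment --region $AWS_REGION
```

The key point is that, with an Arm-based runner, the same Dockerfile produces an arm64 image that ECS can schedule onto Graviton2-backed container instances.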
Similarly, customers can develop applications with Amazon Elastic Kubernetes Service (EKS) and GitLab, as shown in more detail here.
Securing containers as part of a modern development process is also key, and customers can leverage security scanning with Snyk for the Arm-based instances on AWS. In addition, there is growing support from ISVs providing monitoring and security agents such as Datadog, Dynatrace, Chef, Rapid7, and Qualys. The ISV ecosystem for deployments on Arm is growing rapidly and benefits a broad set of cloud-native use cases.
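As an illustration of where such a scan could sit in the pipeline sketched earlier, here is a rough GitLab CI job that tests the freshly built image with the Snyk CLI. The job name, runner tag, installation method, and severity threshold are assumptions for this example; it also assumes a `test` stage between `build` and `deploy` and a `SNYK_TOKEN` CI/CD variable for authentication.

```yaml
# Illustrative container-scan job; names and flags are placeholders.
container-scan:
  stage: test
  tags: [arm64]               # run on an Arm-based runner (placeholder tag)
  image: docker:latest
  services: [docker:dind]
  script:
    # Authenticate to ECR so Snyk can pull the arm64 image, then scan it
    # for known vulnerabilities before it is deployed.
    - apk add --no-cache aws-cli nodejs npm
    - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin ${ECR_IMAGE%%/*}
    - npm install -g snyk
    - snyk auth $SNYK_TOKEN
    - snyk container test $ECR_IMAGE --severity-threshold=high
```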
Application requirements are changing infrastructure designs in the data center and at the edge. In the data center, there is an increasing focus on giving developers the flexibility to run their applications on the optimal platform, with provisioning decisions being policy-driven at run time as much as possible. Virtualized and containerized applications both demand ever more performance, and relying on CPU cores for every level of performance is less attractive given the availability of specialized hardware such as GPUs, FPGAs, and specialized NICs that can offload work and free up CPU cycles for applications. One of the keys to realizing this vision is enabling integrations between the software stack and the hardware accelerators that provide the performance and efficiency benefits applications and developers both require.
To this effect, VMware is leveraging its ESXi-on-Arm implementation on SmartNICs to drive next-generation architectures with Project Monterey. This gives customers better TCO and performance by offloading network processing to free up core CPU cycles for top application performance. Project Monterey will also deliver operational consistency across deployments and enable a zero-trust security model. By integrating these infrastructure innovations with VMware Cloud Foundation, developers benefit from a policy-driven approach to infrastructure.
The applications of ESXi on Arm also extend to Arm-based platforms at the edge, bringing the virtualization benefits of the data center to the requirements of edge computing. The edge is diverse and presents multiple challenges in providing the platform standardization and secure solutions customers need to reap the benefits that traditional IT teams have long enjoyed in the data center. To address this, VMware and Arm have partnered to enable a range of ESXi-on-Arm use cases at the edge.
And today at Arm DevSummit, VMware is announcing the availability of the ESXi on Arm Fling, a technical preview that lets customers develop and evaluate these use cases. Learn more in this VMware announcement blog.
Just as in the cloud, containers are transforming the way edge and IoT platforms are operated and managed. Scalability, manageability, and the ability to deploy general, multi-purpose applications on these devices bring cloud-like flexibility to the IoT world. At first glance, Kubernetes appears too large and complex for edge and IoT devices, which typically have a smaller resource footprint than the data center or the cloud. Rancher's K3s, however, is a lightweight, easy-to-install Kubernetes distribution geared toward resource-constrained environments and low-touch operations - particularly edge and IoT - and is optimized for the Arm architecture.
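As a rough illustration of how little is involved, the K3s documentation installs the distribution with a single command (`curl -sfL https://get.k3s.io | sh -`), after which workloads can be scheduled onto Arm nodes with a standard manifest. The deployment name, container image, and resource limits below are placeholders chosen for this sketch, not from any specific deployment.

```yaml
# Minimal deployment manifest for a K3s cluster on Arm edge devices.
# Name, image, and resource limits are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-demo
  template:
    metadata:
      labels:
        app: edge-demo
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64    # schedule only onto Arm nodes
      containers:
        - name: web
          image: nginx:stable        # multi-arch image; the arm64 variant is pulled
          resources:
            limits:
              cpu: 250m              # keep the footprint small for constrained devices
              memory: 128Mi
```

Applying this with `kubectl apply -f` on the K3s node is enough to get the workload running, which is the kind of low-touch operation these environments call for.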
Another benefit of ISVs providing optimized software for Arm Neoverse platforms is the ability to perform machine learning and AI for real-time applications with low latency requirements. Building an AI inference engine for real-time applications poses real challenges: fast end-to-end inferencing, the ability to scale and deliver without downtime, ease of deploying new models in the field, and support for multiple platforms. To achieve fast performance, reference data should be stored in memory. RedisAI, with its in-memory capabilities and support for Arm-based platforms such as the NVIDIA Jetson Nano, has enabled numerous use cases for applications at the industrial and manufacturing edge.
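To give a feel for the programming model, here is a rough Python sketch that drives RedisAI through the generic redis-py client. The host, model file, key names, and tensor shape are placeholders, and the command syntax follows the RedisAI documentation (AI.MODELSET, AI.TENSORSET, AI.MODELRUN, AI.TENSORGET) rather than any code from the deployments mentioned above.

```python
"""Rough sketch of an in-memory RedisAI inference call from Python.
Host, model file, key names, and tensor shape are placeholders."""
import numpy as np
import redis

# Connect to a RedisAI-enabled Redis server running on the edge device
# (for example, an NVIDIA Jetson Nano) -- the address is a placeholder.
r = redis.Redis(host="localhost", port=6379)

# Load a pre-trained ONNX model into Redis so it lives next to the
# in-memory reference data (file name is a placeholder).
with open("model.onnx", "rb") as f:
    r.execute_command("AI.MODELSET", "edge:model", "ONNX", "CPU", "BLOB", f.read())

# Push an input tensor (e.g., pre-processed sensor readings) into Redis.
sample = np.random.rand(1, 4).astype(np.float32)
r.execute_command("AI.TENSORSET", "edge:in", "FLOAT", 1, 4, "BLOB", sample.tobytes())

# Run inference in memory and read the result back.
r.execute_command("AI.MODELRUN", "edge:model", "INPUTS", "edge:in", "OUTPUTS", "edge:out")
raw = r.execute_command("AI.TENSORGET", "edge:out", "BLOB")
print(np.frombuffer(raw, dtype=np.float32))
```

Because the model and the reference tensors both live in Redis memory on the device, the request never has to leave the edge node, which is what keeps end-to-end inference latency low.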
For edge computing, the network edge is a critical component, with security and connectivity requirements that are increasingly addressed by SD-WAN services. Customers can now cost-effectively deploy Fortinet FortiGate VNFs and SD-WAN services on Arm-based NXP Layerscape platforms.
As we continue to bring value to customers with our ISV ecosystem on Arm Neoverse platforms, we remain focused on enabling new software offerings and developing solutions and integrations that ensure customer deployments are successful.
We look forward to further strengthening our ISV ecosystem and delivering on the strategy outlined this week at Arm DevSummit. To start a conversation, please reach out to us here.
[CTAToken URL = "https://developer.arm.com/solutions/infrastructure/ecosystem/software-ecosystem" target="_blank" text="Visit Arm Software Ecosystem Site" class ="green"]