The automotive industry is committed to a software-defined future, one that solves current challenges and opens up opportunities for new revenue streams, efficiencies, and closer lifetime relationships with consumers. Cloud-native is presented as the design paradigm to get us there, but what does that mean? And can cloud-native really adapt to the realities of cars and their passengers?
My colleague Robert Day has recently explored the promise of a transformation in how vehicles are designed, produced and maintained in his piece ‘The Software Defined Vehicle Needs Hardware that Goes the Distance.’ The potential of a car that gets better as time goes on, where performance, safety and comfort can be fixed, upgraded and improved through software updates, is magnificent. What’s more, this approach is necessary to meet the realities of consumer expectations, disruptive market entrants and external pressures such as the drive towards electrification. To meet these emerging requirements, the complexity of software in the car is increasing exponentially, which drives the use of an agile, service-oriented approach. At the same time, strong rigor is required when developing safety-relevant software for a car.
Where can we find a template for the wholesale transformation of how an industry designs and builds both hardware and software? Fortunately for the aspiring software-defined vehicle architect, such a journey has already taken place in the cloud computing and mobility segments. "Cloud-native" has become the tagline for agile innovation at massive scale in both. But the question remains: how can cloud-native practices be applied to something that participates as physically in the real world as a car, while maintaining safety, which is of paramount importance?
As a key technology provider to the OEMs creating the next generation of software-defined vehicles with its automotive-enabled IP, Arm must engage with its ecosystem across the automotive value chain to identify the challenges of applying the cloud-native design paradigm in automotive and help address them, while enabling its technologies from the cloud to the automotive edge.
In this blog, we touch upon a few aspects of “cloud-native” in the context of automotive and its impact on automotive system architecture.
The Cloud Native Computing Foundation (CNCF) defines cloud-native as:
“Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.”
That’s a lot of jargon for someone not in the business. But this vocabulary underpins an ecosystem of massive value and scale. The infrastructure that has evolved here makes possible the dream of writing world-changing code over a coffee in one’s local café and instantaneously seeing that code tested, packaged, and pushed globally.
At the core of the cloud-native philosophy is Service Oriented Architecture (SOA). Here, applications are built as self-contained functional services that can be deployed and orchestrated on location-agnostic compute systems. This concept can be extended to what are called "microservices". One popular definition is “an approach to designing software as a suite of small services, each running in its own process and communicating with lightweight mechanisms” (https://martinfowler.com/articles/microservices.html). In practical terms, microservices can be thought of as splitting each function into the smallest logical independent service.
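As a concrete, if simplified, illustration, the sketch below shows what such a small self-contained service might look like: one function, exposed over a lightweight HTTP interface, using only the Python standard library. The service name, endpoint and sensor function are purely hypothetical and not taken from any particular automotive stack.

```python
# Minimal sketch of a microservice: one small responsibility, its own
# process, communicating over a lightweight HTTP interface.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_speed_kmh():
    # Placeholder for the service's single responsibility, e.g. reading
    # a vehicle speed value from a sensor interface (hypothetical here).
    return 87.5

class TelemetryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/speed":
            body = json.dumps({"speed_kmh": read_speed_kmh()}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Each microservice runs in its own process (and typically its own
    # container), listening on its own port.
    HTTPServer(("0.0.0.0", 8080), TelemetryHandler).serve_forever()
```

Packaged into a container image, a service like this can be deployed, scaled and replaced independently of the rest of the system.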
Figure 1 Virtualized/Containerized Workload Deployment
These services are typically deployed in containers or as lightweight Virtual Machines (VMs), depending upon the application architecture (Figure 1). This allows multiple services to run on the same machine while maintaining isolation from each other. Open standards have evolved in the data center domain to allow such services to be self-contained, interoperable, location transparent, and hardware agnostic.
There are several benefits to SOA: services can be developed, tested and deployed independently of one another; they can be scaled and updated individually without touching the rest of the system; and faults in one service can be isolated from the others.
One key piece of the puzzle is the container orchestrator: software that automates container deployment, scaling and lifecycle management across the entire system. The most popular container orchestrator is Kubernetes, now the de facto standard in this space.
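To give a feel for what orchestration looks like in practice, the sketch below uses the official Kubernetes Python client to declare a desired state, two replicas of a containerized service, and let the orchestrator place and maintain them. The image name and service name are hypothetical, and in practice the same request is more commonly expressed as a YAML manifest applied with kubectl.

```python
# Sketch: declare a desired state (two replicas of a containerized service)
# and let the Kubernetes orchestrator place, restart and scale it.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig credentials

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="vehicle-telemetry"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "vehicle-telemetry"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "vehicle-telemetry"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="telemetry",
                    image="example.com/vehicle-telemetry:1.0",  # hypothetical image
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

# Submit the desired state; the orchestrator reconciles the cluster towards it.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```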
Having looked at some of the base cloud-native technology enablers, the next step is to put some of these together in an automotive context.
Figure 2 Cloud-native trend across the automotive stacks.
As of today, there are real examples of SOA-based automotive stacks. Many of these are prototypes behind closed doors, but a good public example is Autoware, an open-source autonomous driving stack maintained by the Autoware Foundation (https://www.autoware.org/post/scalable-autoware-auto-through-k8s). Arm is actively working with the Autoware Foundation (AWF) and embracing Autoware as one of the reference automotive workloads for future systems to demonstrate cloud-native DevOps. The general cloud-native principles applied with Autoware should scale to other automotive segments, such as the digital cockpit.
This represents a major shift in the way automotive applications can be developed, as the mature cloud-native ecosystem can start to be tapped for automotive software development, with potentially immense implications for agility, productivity, and time to market.
Automotive computing has certain special requirements that cloud or mobile computing does not. In particular, there is a requirement for real-time control and functional safety in some, but not all, systems. This is referred to as mixed criticality. The end goal is to be able to use the existing cloud-native infrastructure to both develop and deploy microservices that are fully aware of, and functional in, the mixed-criticality environment of automotive computing. This is where a gap exists today, and work will be needed to extend existing cloud-native infrastructure to support DevOps and deployment of mixed-criticality workloads.
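As a rough sketch of the kind of mechanism that might be needed, the example below uses node labels and selectors from today's Kubernetes to steer a safety-relevant service onto nodes marked as suitable for it. The label name, service name and image are hypothetical, and Kubernetes itself has no notion of ASIL levels or real-time guarantees, which is precisely the gap described above.

```python
# Illustrative only: constraining where a (hypothetical) safety-relevant
# service may run, using plain Kubernetes node selectors and fixed resource
# limits. Real mixed-criticality support would need stronger guarantees.
from kubernetes import client

safety_pod = client.V1PodSpec(
    # Hypothetical label: only schedule onto nodes certified for safety workloads.
    node_selector={"vehicle.example.com/criticality": "asil-b"},
    containers=[
        client.V1Container(
            name="brake-monitor",  # hypothetical safety-relevant service
            image="example.com/brake-monitor:1.0",
            resources=client.V1ResourceRequirements(
                # Equal requests and limits give the pod a predictable CPU/memory budget.
                requests={"cpu": "500m", "memory": "128Mi"},
                limits={"cpu": "500m", "memory": "128Mi"},
            ),
        )
    ],
)
```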
The previous section explored the definition of cloud-native in an automotive context and some of the existing technology enablers that have matured in the data centre realm and are paving the way to the software-defined vehicle. However, the paradigm shift in the digitization of the car is not limited to software: there is a major impact on the next generations of automotive compute architecture, as shown in Figure 3.
Figure 3 Automotive E/E Architecture Trend
Most production cars today have multiple small, fixed-function Electronic Control Units (ECUs), often designed by many different vendors. Implicit in such an architecture, beyond the multiple points of failure, is that there is little flexibility for functionality upgrades.
Due to the availability of powerful, heterogeneous system-on-chip based computing, the concept of workload consolidation is making its way into automotive. Multiple ECUs are being consolidated into domain controllers, each handling a specific domain of the car's functionality. The typical domain controllers seen today are in-vehicle infotainment (IVI), digital cockpit, ADAS/AD, and power, chassis and body. This architecture reduces connection complexity while increasing compute cohesion.
With even more capable heterogeneous, purpose-built SoCs on the horizon, there is a trend towards further consolidation, wherein high-performance, ruggedized embedded blade servers replace the domain controllers, paving the way for zonal architectures. Here, sensors are terminated at low-compute, low-power, real-time zonal controllers that perform edge pre-processing before forwarding the data to a high-performance central computer for heavier processing.
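The sketch below illustrates this split in highly simplified form: a zonal controller terminates a local sensor, reduces the raw stream through cheap pre-processing, and forwards the result to the central computer. The network addresses, sampling rates, message format and sensor function are purely illustrative.

```python
# Sketch of a zonal controller's role: terminate a local sensor, do cheap
# edge pre-processing, and forward a reduced data stream to the central computer.
import json
import socket
import time

CENTRAL_COMPUTER = ("10.0.0.1", 9000)  # hypothetical in-vehicle network address

def read_wheel_speed_raw():
    # Placeholder for reading a raw sensor value on the zone's local bus.
    return 87.5

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    window = []
    while True:
        window.append(read_wheel_speed_raw())
        if len(window) == 10:  # pre-process: average 10 raw samples into 1 message
            msg = {"signal": "wheel_speed_kmh", "value": sum(window) / len(window)}
            sock.sendto(json.dumps(msg).encode(), CENTRAL_COMPUTER)
            window.clear()
        time.sleep(0.01)  # ~100 Hz raw sampling, ~10 Hz forwarded

if __name__ == "__main__":
    main()
```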
Figure 4 Centralized Compute Cluster - "Datacenter-On-Wheel"
Figure 4 shows a conceptual high-performance embedded compute (HPEC) cluster that would be fitted in a car. Although the architecture looks like a typical data centre server, the in-car HPEC has additional constraints when it comes to running heterogeneous safety- and real-time-related workloads, not to mention the physical shock, vibration and thermal constraints of the car environment. Several automotive OEMs are already moving along this evolutionary path, whilst those exploring more radical use cases, such as robotaxis or autonomous delivery vehicles, are already deploying the centralized compute architecture.
When we consider this shift towards centralized compute architectures and the deployment of microservices on them, many existing cloud orchestration design principles still apply and can be leveraged. However, things get considerably more complex: the mixed criticality of the deployed services must be maintained, and each service must be matched to the right heterogeneous hardware, which is far more diverse than in the cloud or mobility segments.
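One way the hardware-matching half of that problem is handled in the cloud today is through extended resources exposed by Kubernetes device plugins, as sketched below; the resource name follows the NVIDIA device plugin convention, while the node label and service names are hypothetical. Criticality-aware placement, by contrast, still has no established equivalent.

```python
# Sketch: matching a service to heterogeneous hardware. Kubernetes "extended
# resources" (exposed by device plugins) let a workload request an accelerator.
from kubernetes import client

perception_pod = client.V1PodSpec(
    # Hypothetical label steering the workload to the central compute nodes.
    node_selector={"vehicle.example.com/zone": "central-compute"},
    containers=[
        client.V1Container(
            name="camera-perception",  # hypothetical ADAS perception service
            image="example.com/camera-perception:2.3",
            resources=client.V1ResourceRequirements(
                limits={"nvidia.com/gpu": "1"},  # request one GPU accelerator
            ),
        )
    ],
)
```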
This is only the beginning of the software-defined world emerging across automotive. This approach will fundamentally affect the way automotive software is developed, integrated, tested and deployed in the future. Additionally, the broad digitization of the modern vehicle is rapidly evolving the compute architecture deployed within vehicles. Cloud-native design patterns hold the promise of a scalable approach to managing the increasing complexity of automotive computing, but there are very specific challenges around mixed criticality and scalability to be addressed. In future blogs, we will dive deeper into some of these challenges, along with the initiatives that Arm and its ecosystem are driving.
Get your pass for Arm DevSummit 2021 to learn how to get started with cloud-native automotive development here.