The rising market share of electric vehicles and the future decline of the combustion engine are transforming the automotive industry. One might expect this to simplify vehicle design, but with driver assistance technologies becoming a key differentiator, vehicles are in fact turning into real "data centers on wheels." Furthermore, car drivers are used to a consumer-grade entertainment experience, and this must be reflected in future cabin designs. Some high-end vehicles already contain more than 100 million lines of code, and fully autonomous vehicles are expected to reach half a billion by the end of the decade. By comparison, a modern commercial airliner contains “only” about 14 million lines of code.
As with any radical transformation, the transition to software-defined vehicles comes with plenty of challenges. Fortunately, the industry can rely on one powerful concept to manage this exponential complexity: virtualization.
The first benefit that comes to mind when talking about virtualization in the automotive industry is the much-needed hardware consolidation. Indeed, moving to a centralized architecture with zonal controllers lowers costs and helps to lessen the impact of current chip shortage issues and supply chain dependencies. It also reduces wiring harness weight and complexity while enabling significant savings in development, testing and even toolchain investments.
Virtualization is integral to addressing the growing demand for customized in-vehicle features and capabilities. By providing an open and flexible environment that fosters innovation and creativity, it helps create a market-differentiating vehicle user experience.
Virtualization is also a key requirement for automotive cloudification, unlocking new use cases and enabling all the benefits of a cloud-native environment. From a software practices perspective, it enables open-source models, continuous integration/continuous delivery (CI/CD), microservices, and workload containerization, and it expedites over-the-air (OTA) updates. It also reduces developer friction and de-risks schedules while improving time-to-market. Virtualization even enables edge computing offload through live migration.
Automotive vendors who can rapidly evolve their software will gain a decisive competitive advantage, not least because they can offer the strongest security guarantees. This is why Arm, alongside leading industry partners, is transforming the software-defined future of the industry through SOAFEE, a new automotive software architecture and open-source reference implementation. This industry-led collaboration brings the real-time and safety needs of automotive together with the advantages of a cloud-native approach.
A consequence of this consolidation is that much-needed hardware accelerators, like GPUs, and peripheral interfaces, like Ethernet, will have to be shared while still maintaining the required levels of performance and isolation. In a virtualized environment, this is often achieved by having a specific virtual machine (VM) handle the sharing. However, this solution is not without shortcomings: every access takes an extra hop through the sharing VM, which costs performance and can make that VM both a bottleneck and a single point of failure.
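To make the pattern concrete, here is a minimal sketch in plain C of what such a sharing (or "driver") VM does. The queue and request structures are entirely hypothetical, not taken from any real hypervisor or from the whitepaper: guest VMs post requests to per-VM queues, and the sharing VM drains them towards the physical device.

```c
/* Illustrative sketch of the "sharing VM" pattern. All names and
 * structures are hypothetical; they stand in for a real inter-VM
 * transport and device driver. */
#include <stdint.h>
#include <stdio.h>

#define NUM_GUESTS  3
#define QUEUE_DEPTH 4

typedef struct {
    uint32_t opcode;   /* e.g. 0 = read, 1 = write */
    uint32_t length;   /* payload length in bytes  */
} io_request_t;

typedef struct {
    io_request_t slots[QUEUE_DEPTH];
    int head, tail;    /* simple ring indices */
} guest_queue_t;

static guest_queue_t queues[NUM_GUESTS];

/* Stand-in for the real device driver owned by the sharing VM. */
static void program_physical_device(int guest, const io_request_t *req)
{
    printf("device: guest %d opcode %u length %u\n",
           guest, (unsigned)req->opcode, (unsigned)req->length);
}

/* The sharing VM's mediation loop: every request takes an extra hop
 * through this VM, which is where the added latency and the
 * bottleneck risk mentioned above come from. */
static void mediate_once(void)
{
    for (int g = 0; g < NUM_GUESTS; g++) {
        guest_queue_t *q = &queues[g];
        while (q->head != q->tail) {
            program_physical_device(g, &q->slots[q->head]);
            q->head = (q->head + 1) % QUEUE_DEPTH;
        }
    }
}

int main(void)
{
    /* Simulate two guests submitting one request each. */
    queues[0].slots[queues[0].tail] = (io_request_t){ .opcode = 1, .length = 64 };
    queues[0].tail = (queues[0].tail + 1) % QUEUE_DEPTH;
    queues[2].slots[queues[2].tail] = (io_request_t){ .opcode = 0, .length = 128 };
    queues[2].tail = (queues[2].tail + 1) % QUEUE_DEPTH;

    mediate_once();
    return 0;
}
```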
One potential solution is hardware support for virtualization, as in the Arm Mali-G78AE GPU, where each VM can directly access its own assigned share of the peripheral. This helps to achieve near bare-metal performance.
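The sketch below illustrates the idea in plain C, using a hypothetical register layout and addresses rather than the actual Mali-G78AE programming model: the hypervisor hands each VM a stage-2 style mapping onto its own window of a partitioned device, and the guest then drives that window directly, without trapping to a sharing VM.

```c
/* Illustrative sketch of hardware-assisted device partitioning.
 * The device base address, window size and mapping structure are
 * hypothetical, chosen only to show the shape of the configuration. */
#include <stdint.h>
#include <stdio.h>

#define NUM_PARTITIONS 4
#define WINDOW_STRIDE  0x10000u      /* hypothetical per-VM window size   */
#define DEVICE_BASE    0x40000000u   /* hypothetical device base address  */

/* What the hypervisor would install in each VM's stage-2 translation
 * so that only that VM's own window is ever visible to it. */
typedef struct {
    uint32_t ipa_base;   /* address the guest sees      */
    uint32_t pa_base;    /* physical window it maps to  */
    uint32_t size;
} stage2_device_mapping_t;

static stage2_device_mapping_t assign_partition(int vm_id)
{
    stage2_device_mapping_t m = {
        .ipa_base = DEVICE_BASE,                        /* same view for every guest */
        .pa_base  = DEVICE_BASE + (uint32_t)vm_id * WINDOW_STRIDE,
        .size     = WINDOW_STRIDE,
    };
    return m;
}

int main(void)
{
    for (int vm = 0; vm < NUM_PARTITIONS; vm++) {
        stage2_device_mapping_t m = assign_partition(vm);
        printf("VM%d: IPA 0x%08x -> PA 0x%08x (%u bytes)\n",
               vm, (unsigned)m.ipa_base, (unsigned)m.pa_base, (unsigned)m.size);
    }
    return 0;
}
```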
As we can see through the device virtualization and partitioning examples, which are supported on Arm Cortex-R52 and future automotive Armv8-R CPUs, different options must be weighed and trade-offs made to find the best solution for each use case.
Today’s whitepaper, “Device Virtualization Principles for Real-time Systems,” provides guidance on which of the different options are best suited for virtualizing devices on Armv8-R based systems. As you can read in the whitepaper, not all solutions fit a given use case, and system architects must carefully choose which device-sharing models should be supported in hardware. Whichever approach is chosen, the whitepaper lays out principles for extending the privilege model to the device as the basis of safety and security. To accelerate the adoption of future automotive EE-architectures and reduce the cost of software integration onto shared electronic control units (ECUs), the industry must establish a well-understood set of design patterns and best practices for device virtualization.
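As one example of what extending the privilege model to the device can mean in practice, the purely illustrative C sketch below checks that DMA issued by a shared device on behalf of a VM stays within that VM's assigned memory. The table and the check are assumptions made for illustration only; a real system would enforce this with a hardware protection mechanism in front of the device rather than a software function.

```c
/* Illustrative sketch: a device acting on behalf of several VMs must
 * not exceed the privilege of the VM it serves. Here, each DMA
 * descriptor is validated against the requesting VM's assigned
 * memory window. Addresses and sizes are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t base;
    uint32_t size;
} mem_region_t;

/* Hypothetical per-VM memory regions the device may touch when
 * acting on that VM's behalf. */
static const mem_region_t vm_dma_window[] = {
    { 0x80000000u, 0x00100000u },   /* VM0 */
    { 0x80100000u, 0x00100000u },   /* VM1 */
    { 0x80200000u, 0x00100000u },   /* VM2 */
};

/* Reject any DMA descriptor that falls outside the requesting VM's
 * window: the device inherits that VM's privilege and nothing more. */
static bool dma_request_allowed(int vm_id, uint32_t addr, uint32_t len)
{
    const mem_region_t *r = &vm_dma_window[vm_id];
    return addr >= r->base && len <= r->size &&
           addr - r->base <= r->size - len;
}

int main(void)
{
    printf("VM1 in-window:  %d\n", dma_request_allowed(1, 0x80150000u, 0x1000u));
    printf("VM1 out-window: %d\n", dma_request_allowed(1, 0x80250000u, 0x1000u));
    return 0;
}
```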
Virtualization does not come without a cost. Maintaining safety and predictability while becoming more open and secure may prove challenging, and the required software consolidation effort calls for changes to current practices. At Arm, drawing on our history in virtualization and cloud-native environments, we provide guidance in the “Best Practices for Armv8-R Cortex-R52+ Software Consolidation” whitepaper published last September. However, the path towards software-defined vehicles is far from straightforward.
Indeed, the diversity of computing platforms and automotive software ecosystems impedes reuse and innovation. The whole industry is now contemplating the need to streamline software and interfaces, achieve global standardization, and even develop certification programs. The full potential of virtualization technology can only be exploited through nothing less than a new paradigm shift: further alignment on open standardization. As a first step, this whitepaper formalizes a set of generic requirements that we propose should be met by microcontrollers (MCUs) and systems-on-chip (SoCs), including those built around Armv8-R CPUs. But open standards are built by communities, and Arm is happy to discuss with partners how to further standardize software for this compute architecture. The SOAFEE Hypervisor Tiger Team can be a good place to discuss next steps.
In a nutshell, virtualization is now at the heart of a revolution taking place in the domains served by the Armv8-R architecture. For example, the EL2 separation option in Cortex-R52+ is a strong enabler for the intelligent integration of multiple software stacks. While the CPU architecture has evolved to provide such virtualization features, on the device side, whether for hardware accelerators or I/O peripherals, implementing proper isolation for safety and security while balancing performance and cost may prove challenging. Depending on the use case, solutions with different software and hardware cost versus efficiency ratios will prove optimal. The new whitepaper, “Device Virtualization Principles for Real-time Systems,” discusses these approaches to give guidance to system architects, a prerequisite for the proper standardization of device virtualization in real-time systems.
Click below to download this new whitepaper and join us on SOAFEE to discuss your feedback, or send it to v8r-device-virt-feedback@arm.causewaynow.com.
[CTAToken URL = "https://armkeil.blob.core.windows.net/developer/Files/pdf/white-paper/device-virtualization-whitepaper.pdf" target="_blank" text="Read now" class ="green"]