The automotive industry is currently looking at the technology innovation needed to move from today’s prototype autonomous vehicles to safe, deployable self-driving solutions. This technology must tackle the key challenges that are currently preventing us from producing safe Level 4 and Level 5 autonomous vehicles. In this blog we will cover these key challenges and explore the solutions available to automakers that fit within their required timeframes.
We’re seeing the complexity of autonomous automotive systems growing at an unprecedented rate, and computational processing must keep pace with this growth without worsening the existing constraints of power consumption, thermal properties, size, cost, safety and security. On top of these technology challenges, there are still many debates about consumer and regulatory acceptance of full autonomy. For example, a recent survey from the AAA showed that seventy-three percent of American drivers are too afraid to ride in fully self-driving vehicles. Another social and technological challenge is that autonomous cars must share the road with human drivers like us (who will likely make the autonomous car’s life an algorithmic hell).
So, let’s explore the challenges that we need to consider for safe deployment of autonomous vehicles at scale.
It has been suggested that, if produced in 2020, a Level 4 or Level 5 car could cost an extra $75,000 to $100,000 compared to a regular car. This figure may even be too low: the total cost is likely to exceed $100,000 once the number of sensors needed to achieve Level 4 and Level 5 autonomy is considered. To make purchasing these vehicles feasible, the price will need to come down dramatically so that they are affordable for consumers. This high price makes it likely that the first real deployments of autonomous vehicles will be part of Mobility-as-a-Service (MaaS), ride-sharing or robotaxi fleets. By replacing the cost of a human driver, and by achieving much higher vehicle utilization than individual consumers would, these operators could build a business model that supports these more expensive vehicles.
As shown in the diagram above, Level 3 is the first step in the move from ADAS to autonomy. However, there is currently some debate over Level 3 autonomy and the requirements put on both the vehicle and the driver. Successfully deployed Level 3 autonomy requires the driver to still be alert while the vehicle’s self-driving functions are active. This raises an interesting issue: as drivers, we will instinctively assume that as soon as we take our hands off the wheel we no longer need to pay attention, and can quite happily do our email, send texts and so on, taking both our eyes and our minds off the road. However, with Level 3, the car can ask the driver to take back control of the vehicle at any time. This raises the question of how quickly a distracted driver can come back to the wheel and take back control to deal with the situation the autonomous car couldn’t handle. Some car manufacturers are currently discussing skipping Level 3 as a way to overcome this challenge. From a liability perspective, skipping Level 3 would also make it easier to determine whether the driver or the vehicle is in control. There are also discussions about advanced driver monitoring systems that use in-cabin cameras and advanced software algorithms to determine whether the driver is alert and fit to take back control, and if not, activate the appropriate warning to bring the driver back to full readiness. Even if car manufacturers decide to skip this level, the technology complexity required to get from Level 3 capability to Level 4 is much greater still.
The question of whether to skip Level 3 was recently posed to a panel of automotive industry experts, and their answers can be viewed in our blog, Delivering Future Autonomous Systems.
The move from ADAS to autonomous demands a much greater awareness of everything around the car. To accomplish this, the number of sensors on the car is dramatically increasing, with multiple LiDAR, camera and radar sensors required to essentially replace and enhance human sight and situational awareness. Not only are these sensors expensive, but the processing required to understand what they are “seeing” and the situation evolving outside the car is dramatically different to the compute required by simpler ADAS functions such as adaptive cruise control or emergency braking.
The majority of autonomous vehicles being prototyped right now are essentially testing the increased sensor complexity and the software algorithms needed to process the large amount of information coming into the car, make the right decision about what to do and then action it. This processing requires a substantial amount of software, with current estimates of around 1 billion lines of code to power a fully autonomous car. The compute requirements to execute this much software are more akin to server performance than traditional automotive embedded processing. This is driving a trend towards consolidating much more powerful clusters of application processors and accelerators into more performant multicore SoCs rather than discrete CPUs. This consolidation requires a dramatic change in software architectures, and can also cause a dramatic increase in the software footprint.
The software application complexity is far greater than that of even the most advanced passenger jets already brimming over with autonomous functionality, because autonomous cars will have to deal with highly chaotic roads full of unpredictable human drivers and pedestrians, versus the relatively empty skies full of professional pilots. A large amount of algorithmic processing needs to happen in real time to understand everything happening around the car, and a huge software stack is then required for all of the autonomous compute components to make the right decisions and execute them safely. This greater complexity lends itself to a common and unified platform architecture on which to build an easily upgradable and portable software stack.
As stated earlier, recent statistics have shown that 73% of American drivers are too afraid to ride in fully self-driving vehicles and, astonishingly, 63% of US adults would feel less safe sharing the road with self-driving vehicles while walking or cycling. This raises a new and interesting challenge of how we gain consumer trust, both as a passenger in an autonomous car, and someone sharing the same environment with the car.
Safety is a key part of many automotive systems, and rigorous safety standards and certifications are applied to any functions that need to work reliably when requested by the driver, such as braking and steering. When we increase autonomy in a car, we are essentially replacing the safe decision-making of a human driver with a complex computer system comprising many heterogeneous compute elements and, as discussed earlier, a billion lines of code. How can we guarantee that this hugely complicated compute system will execute to the highest levels of passenger and environmental safety?
With the consolidation of functions onto powerful multicore SoCs, there will also be the need to support mixed-criticality applications on a single SoC: some applications executing life-critical functions will require the highest levels of functional safety, mixed with applications operating at a lower criticality level. It would be impractical to certify all of the software to the highest level of functional safety, so a compute and software architecture is needed that supports these different safety levels without dedicating a separate SoC to each application.
The compute systems going into today’s autonomous prototypes are typically based on off-the-shelf server technology. The challenge with server technology is that its size, power consumption and thermal properties are not suitable for cars; all of these attributes need to be reduced significantly. The common belief is that power consumption needs to come down by 10x and size by 5x. If both can be achieved, there will be a significant reduction in cost and dissipated heat, which in turn allows simpler and more reliable cooling methods. These improvements will enable the true deployment of self-driving cars, both in the consumer space and in robotaxis.
There is an increasing trend for consumers to want a richer, more engaging in-cabin experience. As we get to higher levels of autonomy, the occupants of vehicles will turn from drivers into passengers, and their requirements for information, entertainment and connectivity will become more akin to those of their home and office.
Before we arrive at full autonomy, there will be an interesting hybrid of driver and environmental information being fused with entertainment and productivity features. This poses the challenge of mixing safety-critical content into these feeds while ensuring that driver safety information is not compromised by the other forms of information being displayed.
Over the next 5-7 years, as we move to a more autonomous world, different kinds of information will be delivered to in-car screens: driver information from autonomous systems, media experiences, driver monitoring systems and inward-facing sensors, all helping to deliver a more personalized in-car experience. This will require high-throughput data delivery to screens, high-bandwidth connectivity and enhanced safety, especially for critical information such as driver warnings.
Automotive OEMs and Tier 1s increasingly recognise the need for a strong technology partner to help them address these challenges and view Arm and its broad automotive ecosystem as the right partnership to make this happen.
Arm has been working closely with the automotive industry to understand each of the challenges outlined above and is now providing new solutions that will help power the production of fully autonomous vehicles at scale.
The unparalleled range of Arm CPUs and other IP elements such as GPUs, ISPs and NPUs allows Arm-based solutions to be used throughout the whole vehicle, with the broadest set of automotive-grade SoCs being offered by Arm’s semiconductor partners. This range of application processors (Cortex-A), real-time processors (Cortex-R) and small, low-power processors (Cortex-M) fits across all the phases of an autonomous system, as shown above. As Arm’s semiconductor partners bring more of these compute elements onto single heterogeneous SoC platforms, this will help meet the processing requirements while also reducing power, price, size and heat.
The Arm Safety Ready program encompasses Arm’s existing and future products that have been through a rigorous functional safety process, including systematic flows and development in support of ISO 26262 and IEC 61508 standards. Safety Ready is a one-stop shop for software, tools, components, certifications and standards which will simplify and reduce the cost of integrating functional safety for Arm partners. By taking advantage of the program offerings, partners and car makers can be confident their SoCs and systems incorporate the highest levels of functional safety required for autonomous applications. Read our recent blog to find out more about the Arm Safety Ready program.
Arm has also taken another huge step forward in meeting the innovation needs by adding automotive enhanced features to some of its key technologies. One of these features is Split-Lock which enables clusters of processors to be either split for performance or locked together for higher levels of safety.
First introduced in the Cortex-R52 real-time processor, this feature has now been brought into two of Arm’s application processors targeting the high-performance safety requirements for autonomous sensing and perception processing. While Split-Lock is not new to the industry, Arm is the first to introduce it to processors uniquely designed for high performance automotive applications such as autonomous drive.
This Split-Lock innovation will enable Arm’s semiconductor partners to build safety-capable SoCs that can be configured at deployment to host mixed-criticality elements on a single SoC, and to use the same SoC for multiple applications in a vehicle. This will greatly help with the deployment of mixed-criticality systems, such as an ECU running both ADAS and IVI functionality, and will also help keep costs down, as single parts can be used across a wide set of automotive applications.
For more information on Split-Lock, read our blog: Evolving safety systems: Comparing Lock-Step, redundant execution and Split-Lock technologies.
The first Automotive Enhanced (AE) application processor was the Cortex-A76AE which was launched in September 2018. This processor offers the performance required for autonomous compute, but at a much lower power level than traditional server chips. Coupled with the Split-Lock capability, this processor will allow for truly deployable safe autonomous compute.
The latest Automotive Enhanced application processor, the Cortex-A65AE, is aimed at safe sensor and information processing and was announced in December 2018. The Cortex-A65AE is Arm’s first multithreaded processor and the industry’s first with integrated safety that has been designed for high throughput applications such as sensor processing in autonomous vehicles. Its multithreaded capability is layered on top of Split-Lock, allowing you to configure and prioritize safety or performance, and it delivers 3.5x higher throughput than prior generations with better power efficiency. Historically, a separate processor would be needed for each sensor stream, but now with the Cortex-A65AE two streams of sensor information can be processed per core, improving efficiency and latency.
Both of these new application processors can be coupled with each other, and with other compute elements, to form a truly deployable autonomous computing complex, all based on a common architecture. This will help automakers build their ideal compute topology and optimize their software stack across these elements without having to change tools or architectures; and because the safety locking of cores is done in hardware, no software changes are required to accommodate it.
Arm has always relied on a broad and healthy ecosystem to bring key technologies to market, and this is particularly true in the automotive industry where multi-vendor ecosystem support for both hardware and software is a key factor for deployment for both OEMs and Tier 1s.
Of the top 20 semiconductor suppliers to the automotive industry, 15 are Arm licensees. The breadth of Arm’s support allows ecosystem partners to build SoCs for all parts of the car, from powertrain and body through to cabin and connectivity and finally in the move from ADAS to autonomous systems.
However, as noted previously, one of the broadest challenges for deployment of new complex automotive innovation is the software that enables it. No company can write a billion lines of code to win the race to full autonomy, and hence automakers are relying on a software ecosystem to provide them with the building blocks to get there. Those building blocks can be based on open source, but for much of the real-time and safety-critical software stack, commercial offerings are preferred for their safety pedigree.
Arm has undertaken to support both our open-source communities and commercial software entities to make a broad range of software solutions available across all the vehicle systems, optimized for the Arm architecture. This ecosystem is a place where these companies can thrive and make their products easily available to the automotive industry.
As the autonomous software stack grows, there is a new collection of software companies focusing on providing solutions for different applications running on Arm architectures. At this year’s CES a number of key partners showed demos of their software solutions running on Arm-based hardware, from deep-learning neural networks for perception processing (see how an Arm partner tackles this issue) through to HD mapping applications and GPS localization stacks for accurate autonomous positioning. Read our recent CES blog to find out about the latest automotive innovations that are powered by Arm.
Arm has spent a considerable amount of time working with the automotive industry to fully understand the challenges and pain points standing in the way of deploying the next wave of automotive innovation. The recently announced technology innovations from Arm will help make those deployments a reality, and will cut power, size and cost without compromising performance and safety. Arm is working with key OEMs, Tier 1s and the broader ecosystem to help simplify and accelerate the path to real deployment of autonomous vehicles which will redefine our concept of mobility and enable a new era of automotive innovation.
[CTAToken URL = "https://www.arm.com/solutions/automotive" target="_blank" text="View Arm’s Automotive Solutions" class="green"]