As AI technology rapidly makes its way onto mobile and edge devices, Arm has been playing a key role in the AI domain with significant technological influence. Arm has continuously delivered breakthroughs in mobile processor performance, enabling engineers to deploy more deep learning algorithms on mobile devices.
Baidu is a leading AI company with a strong Internet foundation in China. Its open-source platform PaddlePaddle integrates multi-level components to create an efficient, flexible, and scalable deep learning platform. Among its products, Paddle Lite is an industry-leading high-performance inference engine for edge devices, and it is under continuous development to improve its support for Arm-based platforms.
Paddle and Arm share a vision for the mobile hardware ecosystem, and the two parties have been in a long-term collaboration. In the past few months, the Arm Compute Library (ACL) team has engaged deeply with Paddle's core R&D team to improve overall performance on Arm Cortex-A CPUs and Mali GPUs in mobile and edge devices. The collaboration aims to deliver a better user experience when Arm-based hardware is used as the inference back end. Based on the instruction-set characteristics of the different Arm architectures, the technical exchanges covered multiple compute and memory-access optimization scenarios. Combining an analysis of key operators in Paddle Lite with the ACL team's experience, Paddle's R&D team optimized operator implementations along multiple dimensions (a simplified sketch of this style of optimization follows the CPU list below). The optimizations include, but are not limited to, the following aspects:
Use cases for Arm Cortex-A CPU:
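As a concrete illustration of this style of CPU-side work, below is a minimal, hypothetical sketch, not actual Paddle Lite or ACL code, of the kind of NEON vectorization involved: a fused bias-add plus ReLU loop that loads, computes, and stores four floats per iteration, reducing both instruction count and memory traffic compared with a scalar loop.

```cpp
// Hypothetical sketch of NEON (Armv7/Armv8) vectorization of an
// operator epilogue: out[i] = max(in[i] + bias, 0) over n elements.
#include <arm_neon.h>
#include <cstddef>

void bias_relu_f32(const float* in, float bias, float* out, std::size_t n) {
  const float32x4_t vbias = vdupq_n_f32(bias);   // broadcast bias to 4 lanes
  const float32x4_t vzero = vdupq_n_f32(0.0f);   // ReLU threshold
  std::size_t i = 0;
  for (; i + 4 <= n; i += 4) {
    float32x4_t v = vld1q_f32(in + i);           // 128-bit vector load
    v = vmaxq_f32(vaddq_f32(v, vbias), vzero);   // fused add + ReLU
    vst1q_f32(out + i, v);                       // 128-bit vector store
  }
  for (; i < n; ++i) {                           // scalar tail
    float x = in[i] + bias;
    out[i] = x > 0.0f ? x : 0.0f;
  }
}
```

Fusing the bias-add and activation into one pass is one of the memory-access optimizations mentioned above: the data is read and written once instead of twice.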
Use cases for Arm Mali GPU:
Through these and other optimization methods, both general and target-specific, Paddle Lite models running on Cortex-A CPUs and Mali GPUs obtained a considerable performance improvement. The accuracy of some models also increased. We measured operator-level and model-level performance of Paddle Lite before and after optimization across several data dimensions.
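For reference, the following is a minimal sketch of how such end-to-end latencies can be measured with Paddle Lite's C++ API. The model file name, input shape, thread count, and iteration counts are placeholders, and API details may differ slightly between Paddle Lite releases.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>

#include "paddle_api.h"  // shipped with the Paddle Lite C++ package

int main() {
  using namespace paddle::lite_api;

  // Load a model converted to Paddle Lite's .nb format by the opt tool.
  MobileConfig config;
  config.set_model_from_file("mobilenet_v1.nb");  // placeholder file name
  config.set_threads(2);
  config.set_power_mode(LITE_POWER_HIGH);
  auto predictor = CreatePaddlePredictor<MobileConfig>(config);

  // Prepare a dummy 1x3x224x224 input.
  auto input = predictor->GetInput(0);
  input->Resize({1, 3, 224, 224});
  float* data = input->mutable_data<float>();
  std::fill_n(data, 1 * 3 * 224 * 224, 0.5f);

  // Warm up to stabilize caches, CPU frequency, and memory pools.
  for (int i = 0; i < 10; ++i) predictor->Run();

  // Report the average latency over repeated runs.
  const int repeats = 100;
  auto t0 = std::chrono::steady_clock::now();
  for (int i = 0; i < repeats; ++i) predictor->Run();
  auto t1 = std::chrono::steady_clock::now();
  double ms =
      std::chrono::duration<double, std::milli>(t1 - t0).count() / repeats;
  std::printf("average latency: %.3f ms\n", ms);
  return 0;
}
```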
With Cortex-A CPU:
Figure 1: Operators' performance improvement on Armv8
Figure 2: Operators' performance improvement on Armv7
Figure 3: Performance of typical model on Armv8
Figure 4: Performance of typical model on Armv7
For Mali GPU-based devices, we ran similar tests, with the following results.
Figure 5: Model performance improvement on Mali-G76 (OpenCL) in the Huawei Mate 30 (Kirin 990)
Figure 6: Model performance improvement on Mali-T860 (OpenCL) on the RK3399
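For completeness, here is a small sketch of how an application can select the OpenCL (Mali GPU) path at run time. It assumes a Paddle Lite build with the OpenCL backend enabled, models converted by the opt tool with the corresponding --valid_targets setting, and the IsOpenCLBackendValid() helper available in recent Paddle Lite releases; the file names are placeholders.

```cpp
#include <memory>

#include "paddle_api.h"

// Returns a predictor that uses the Mali GPU via OpenCL when the device,
// driver, and library build support it, falling back to the CPU otherwise.
std::shared_ptr<paddle::lite_api::PaddlePredictor> MakePredictor() {
  using namespace paddle::lite_api;
  const bool use_gpu = IsOpenCLBackendValid();
  MobileConfig config;
  // model_opencl.nb: converted with --valid_targets=opencl
  // model_arm.nb:    converted with --valid_targets=arm
  config.set_model_from_file(use_gpu ? "model_opencl.nb" : "model_arm.nb");
  return CreatePaddlePredictor<MobileConfig>(config);
}
```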
The Paddle team has benefited greatly from this collaboration. Paddle Lite, the mobile inference engine of Paddle, is the key platform supporting inference tasks in Baidu's mobile applications, and after the optimization it shows an impressive performance improvement in many commercial applications. Taking some general visual-inspection models in mobile phone applications (such as long-press recognition) as an example, after this optimization the models obtained a 22% performance acceleration and a 3.4% accuracy improvement, which significantly improves the user experience of the mobile application. As Paddle Lite continues to grow in operator coverage and runtime efficiency, it becomes possible to deploy more complex and higher-performance algorithms and models on mobile devices.
With the rapid development of AI today, Arm and Baidu look forward to continued collaboration in shaping the future of AI.
[CTAToken URL = "https://github.com/ARM-software/ComputeLibrary" target="_blank" text="Learn more about Arm Compute Library" class="green"][CTAToken URL = "https://github.com/PaddlePaddle/Paddle-Lite" target="_blank" text="Learn more about Paddle Lite" class="green"]