Bringing AI from the cloud into endpoints is the hot technology trend right now, because endpoints such as IoT devices and robots are expected to be ever smarter and to react in real time. The AI required at these endpoints uses deep-learning inference to take over tasks of human perception, such as vision and hearing.
To implement AI in endpoints, two major challenges need to be overcome: power consumption and flexibility. While the cloud can be equipped with ample power and cooling, endpoints must strictly limit power consumption, since excess power draw shortens battery life, generates heat, and increases cost. The usual approach to saving power is to use dedicated hardware specialized for specific AI processing. However, such hardware quickly becomes obsolete because AI models evolve day by day. AI acceleration in endpoints therefore needs to be flexible enough to support newly developed AI models.
Renesas has developed the DRP-AI (Dynamically Reconfigurable Processor for AI) as an AI accelerator for high-speed inference processing. The DRP-AI achieves the low power consumption and flexibility that endpoints require, building on the reconfigurable processor technology Renesas has cultivated over many years. Come and learn more at Arm DevSummit by joining our session, Advancing Interactive Intelligence, on 19 October.
DRP-AI is composed of an AI-MAC and a DRP. The AI-MAC efficiently processes the operations in convolutional and fully connected layers by optimizing data flow with internal switches. The DRP handles complex functionality, such as image preprocessing and the pooling layers of AI models, flexibly and quickly by dynamically changing its hardware configuration. Renesas also offers the "DRP-AI translator", a development tool that automatically allocates each operation of an AI model to the AI-MAC or the DRP. This allows users to take advantage of DRP-AI without detailed knowledge of the underlying hardware.
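The allocation idea can be pictured as a simple partitioning of a model's operator list. The sketch below is purely illustrative: the operator names and allocation rules are hypothetical assumptions for clarity, not the actual DRP-AI translator API or its real scheduling logic.

```python
# Illustrative sketch only: hypothetical operator names and allocation rules,
# NOT the actual DRP-AI translator interface.

# Multiply-accumulate-dominated layers go to the AI-MAC; preprocessing,
# pooling, and other operators go to the reconfigurable DRP.
AI_MAC_OPS = {"conv2d", "fully_connected"}

def allocate(model_ops):
    """Split a list of operator names between the two execution units."""
    plan = {"AI-MAC": [], "DRP": []}
    for op in model_ops:
        unit = "AI-MAC" if op in AI_MAC_OPS else "DRP"
        plan[unit].append(op)
    return plan

# Example: a small CNN pipeline with image preprocessing in front.
pipeline = ["resize", "normalize", "conv2d", "max_pool",
            "conv2d", "avg_pool", "fully_connected", "softmax"]
print(allocate(pipeline))
```

In the real toolchain this partitioning is performed automatically from a trained model description, which is what lets developers stay unaware of the hardware split.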
This DRP-AI technology is built into the RZ/V series (RZ/V2M and RZ/V2L), which pairs it with a dual Arm Cortex-A53 based CPU. Its excellent power efficiency eliminates the need for heat dissipation measures such as heat sinks or cooling fans. The RZ/V series thus reduces product size and BOM cost, and accelerates time-to-market for AI products.
Register interest for Arm DevSummit 2022