There’s no denying that Embedded World (EW) is a whirlwind – 1,000 exhibits, 35,000 visitors and over 2,000 industry participants – but now that it’s all over and the dust has settled, I wanted to take a moment to reflect on the event and consider its ripple effects going forward.
This year, the event focused heavily on the trend of embedded intelligence. All the major players – from silicon providers to software developers – had a presence, and Arm was no exception … but what really hit me as I walked around the vast exhibition space was the scope of Arm’s influence: almost every business seemed to be using Arm technology in some way to drive their products and make a splash in the embedded marketplace. It was pretty inspiring to see!
What was equally impressive was the range of sectors these products touched: from smart retail and industrial automation to drones and automotive, there seemed to be no limit to the reach of machine intelligence powered by Arm.
As a dedicated consumer of Italian food, I was intrigued by one example of industrial automation: AuZone’s pasta classifier, which uses object detection and classification running on an Arm Cortex-A-based platform with a TensorFlow backend to separate farfalle from penne, rotini and macaroni on a moving conveyor belt. Typically, this kind of solution relies on a cloud connection, but this one was running wholly at the edge. (Sadly, there were no related dishes to sample.)
AuZone’s pasta classifier – object detection and classification running on an Arm Cortex-A-based platform
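I don’t have AuZone’s code, of course, but to give a flavor of what the classification half of that pipeline looks like when it runs entirely on the device, here’s a minimal sketch using TensorFlow Lite. The model file, label list and input frame are all hypothetical placeholders, and the float input assumes an unquantized model.

```python
# Minimal edge image-classification sketch (illustrative only, not AuZone's code).
# "pasta_classifier.tflite", the labels and the camera frame are placeholders.
import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite  # or: from tensorflow import lite as tflite

LABELS = ["farfalle", "penne", "rotini", "macaroni"]

interpreter = tflite.Interpreter(model_path="pasta_classifier.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize the camera frame to the model's expected input shape.
_, height, width, _ = input_details[0]["shape"]
frame = Image.open("conveyor_frame.jpg").convert("RGB").resize((width, height))
input_data = np.expand_dims(np.asarray(frame, dtype=np.float32) / 255.0, axis=0)

interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()  # inference runs entirely on the local Cortex-A CPU

scores = interpreter.get_tensor(output_details[0]["index"])[0]
print("Detected:", LABELS[int(np.argmax(scores))])
```

The point is that nothing in that loop touches the network: capture, inference and the sorting decision all happen on the conveyor-side device.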
Walking round the event, it was impossible not to notice the increasing amount of on-device machine learning (ML); the pasta classifier was by no means the only example. There were several ‘smart factory’ demos showcasing mock assembly lines with low-cost cameras for defect monitoring and quality control, moving intelligence to the endpoint to make defect recognition instantaneous.
Predictive maintenance is a task that’s also well-suited to edge ML, as Renesas demonstrated with their fault detection solution, which uses an accelerometer to detect anomalies in a motor’s current, torque and rotation speed. Other demos showed vision-focused smart retail use cases, such as in-store image-based checkout of your basket and digital signage with a camera that detects gender, age and mood, to predict which advertisement will have the most impact on its audience.
(Incidentally, if vision's your field of interest, you could check out our whitepaper, Adding Intelligent Vision to Your Next Embedded Product.)
Renesas's fault detection solution
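Renesas didn’t hand out source code, but the underlying idea is easy to sketch: learn what “normal” telemetry looks like, then flag readings that stray too far from it. The toy example below uses a simple per-feature z-score; the feature values and threshold are invented for illustration, and this is not Renesas’s algorithm.

```python
# Toy anomaly detector for motor telemetry (current, torque, rotation speed).
# Purely illustrative: not Renesas's method, and the numbers are made up.
import numpy as np

def fit_baseline(healthy_samples: np.ndarray):
    """Learn per-feature mean/std from telemetry captured on a healthy motor."""
    return healthy_samples.mean(axis=0), healthy_samples.std(axis=0)

def is_anomalous(sample: np.ndarray, mean: np.ndarray, std: np.ndarray,
                 threshold: float = 4.0) -> bool:
    """Flag a sample whose z-score exceeds the threshold on any feature."""
    z = np.abs((sample - mean) / (std + 1e-9))
    return bool(np.any(z > threshold))

# Example: baseline from 10,000 healthy readings, then score a live reading.
healthy = np.random.normal(loc=[2.0, 1.5, 3000.0], scale=[0.1, 0.05, 25.0],
                           size=(10_000, 3))
mean, std = fit_baseline(healthy)
live_reading = np.array([2.9, 1.52, 3010.0])   # current spike -> anomaly
print(is_anomalous(live_reading, mean, std))   # True
```

A real deployment would use richer features and a trained model, but even this skeleton shows why the edge is a natural home for the job: the raw sensor stream never needs to leave the machine.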
There’s not much that gets me more excited than pasta, but DroNet – a convolutional neural network that can safely drive a drone through the streets of a city – came pretty close. The model learns to navigate by imitating human-driven cars and bicycles, which (hopefully) already follow the traffic rules. And, say the folks behind DroNet, it produces “for each single input image, two outputs: a steering angle, to keep the drone navigating while avoiding obstacles, and a collision probability, to let the UAV recognize dangerous situations and promptly react to them.” Which all sounds pretty cool.
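To make that two-output idea concrete, here’s a minimal Keras sketch of a network with a shared convolutional trunk and two heads: one regressing a steering angle, one predicting a collision probability. It’s deliberately simplified and is not the published DroNet architecture; the input size and layer widths are assumptions.

```python
# Minimal two-headed CNN illustrating the DroNet idea of predicting a steering
# angle and a collision probability from a single image.
# Simplified sketch; not the published DroNet architecture.
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(200, 200, 1), name="grayscale_frame")
x = layers.Conv2D(32, 5, strides=2, activation="relu")(inputs)
x = layers.Conv2D(64, 3, strides=2, activation="relu")(x)
x = layers.Conv2D(128, 3, strides=2, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(64, activation="relu")(x)

# Head 1: steering angle (regression).
steering = layers.Dense(1, name="steering_angle")(x)
# Head 2: probability of imminent collision (binary classification).
collision = layers.Dense(1, activation="sigmoid", name="collision_prob")(x)

model = Model(inputs, [steering, collision])
model.compile(optimizer="adam",
              loss={"steering_angle": "mse",
                    "collision_prob": "binary_crossentropy"})
model.summary()
```

Training the steering head on imitation data and the collision head on labelled near-miss footage is exactly the split the DroNet quote describes.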
Embedded vision was absolutely EVERYWHERE at EW, as were voice-controlled front-ends for consumer devices. Vision and voice are key drivers for the home market – and it’s a sector that looks set to grow and grow as consumers get wise to the convenience of smart gadgets, particularly those offering an intuitive user experience, seamlessly integrated with our lifestyles.
As testament to the fact that we’re all getting used to barking commands at inanimate objects, I spotted a number of products – such as the NXP i.MX RT106A crossover processor – running keyword spotting on an Arm Cortex-M7. Offline command recognition and sound classification are big news for edge devices: there are significant benefits to be gained from on-device processing, not least in terms of latency and security. And if you can do all that on a tiny little Cortex-M, well … so much the better.
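To put “tiny” in perspective, a typical keyword-spotting model for this class of device pairs MFCC features with a very small convolutional network, then quantizes it before deployment. The sketch below is illustrative only; the layer sizes and feature dimensions are assumptions, not anything lifted from NXP’s demo.

```python
# Tiny keyword-spotting model sketch: a depthwise-separable CNN over MFCC
# features, small enough (on the order of tens of KB of weights) to fit the
# flash and SRAM budget of a Cortex-M7-class part. Shapes and layer sizes are
# illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_KEYWORDS = 10                    # e.g. "yes", "no", "up", "down", ...
MFCC_FRAMES, MFCC_COEFFS = 49, 10    # ~1 s of audio, 10 cepstral coefficients

model = models.Sequential([
    layers.Input(shape=(MFCC_FRAMES, MFCC_COEFFS, 1)),
    layers.Conv2D(64, (10, 4), strides=(2, 2), padding="same", activation="relu"),
    layers.SeparableConv2D(64, (3, 3), padding="same", activation="relu"),
    layers.SeparableConv2D(64, (3, 3), padding="same", activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_KEYWORDS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# After training, convert for the microcontroller. This applies weight
# quantization; full int8 conversion also needs a representative dataset.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
open("kws_quantized.tflite", "wb").write(converter.convert())
```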
Dylan and a colleague have fun with gesture recognition
Not only was there more on-device ML in evidence, there was also an increasing migration of functionality from the cloud to the edge device – moving, for example, from simple keyword spotting to automatic speech recognition, enabling devices to recognize a wider range of commands, from "Turn up the volume" to "Set the temperature to 68 degrees".
Perhaps as reinforcement of this move to the edge, a number of companies – Arm included – were showing platforms that reduce the barriers to developing edge AI. (At this point, it behooves me to give a shout-out to Arm NN. In case you’re not familiar with it, it’s an open-source, common software framework that bridges the gap between the NN frameworks edge developers want to use and the underlying processors on their platform. If you’re interested in giving it a try, you can find Arm NN ‘how to’ guides on our ML Developer Community site.)
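For a sense of what that looks like in practice, here’s a rough outline of the inference flow through Arm NN’s Python bindings (PyArmNN): parse a model from a familiar framework format, optimize it for whichever backends the device offers, then run it. The call names follow Arm’s published PyArmNN examples, but treat this as a sketch rather than verified, version-exact API usage; the model file and input shape are placeholders.

```python
# Sketch of the Arm NN flow via PyArmNN: parse, optimize for available
# backends, run. Call names follow Arm's published examples; exact API may
# vary by version. Model file and input shape are placeholder assumptions.
import numpy as np
import pyarmnn as ann

# Parse a TensorFlow Lite model into Arm NN's internal network format.
parser = ann.ITfLiteParser()
network = parser.CreateNetworkFromBinaryFile("model.tflite")

# Optimize for the backends on this device, in order of preference
# (NEON-accelerated CPU first, plain reference CPU as a fallback).
runtime = ann.IRuntime(ann.CreationOptions())
backends = [ann.BackendId("CpuAcc"), ann.BackendId("CpuRef")]
opt_network, _ = ann.Optimize(network, backends, runtime.GetDeviceSpec(),
                              ann.OptimizerOptions())
net_id, _ = runtime.LoadNetwork(opt_network)

# Bind input/output tensors and run a single inference.
graph_id = 0
input_name = parser.GetSubgraphInputTensorNames(graph_id)[0]
input_info = parser.GetNetworkInputBindingInfo(graph_id, input_name)
output_name = parser.GetSubgraphOutputTensorNames(graph_id)[0]
output_info = parser.GetNetworkOutputBindingInfo(graph_id, output_name)

input_data = np.zeros((1, 224, 224, 3), dtype=np.float32)  # placeholder frame
input_tensors = ann.make_input_tensors([input_info], [input_data])
output_tensors = ann.make_output_tensors([output_info])
runtime.EnqueueWorkload(net_id, input_tensors, output_tensors)
results = ann.workload_tensors_to_ndarray(output_tensors)
```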
At an event dedicated to embedded processing, a session about Machine Learning on Arm Cortex-M was bound to go down a storm, and Arm’s very own Naveen Suda took to the stage to discuss methods for NN architecture exploration, using image classification on the CIFAR-10 dataset, and the development of models for constrained devices. (If that sounds like your sort of thing, you can find out more by watching the related on-demand webinar or downloading the ML on Arm Cortex-M Microcontroller white paper.)
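As a flavor of the starting point for that kind of exploration, here’s a deliberately small Keras CNN for CIFAR-10 – the sort of model you would then prune, quantize and benchmark against a Cortex-M memory budget. The layer configuration is illustrative, not the one presented in the talk or white paper.

```python
# A deliberately small CIFAR-10 CNN of the kind you might start from when
# targeting a Cortex-M memory budget; layer sizes are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```

Architecture exploration of the sort Naveen described is then a matter of trading layer widths and depths against accuracy until the model fits the target device’s memory and cycle budget.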
In conclusion, based on the buzz at the event, I anticipate a continued increase in the number of use cases for device-based ML, particularly around the ‘three Vs’ of voice, vision and vibration. ML is no longer the preserve of high-end, or cloud-based, applications; embedded software developers already have the tools they need to start making their systems smarter. Aside from a renewed love of pasta, Embedded World gave me an exciting insight into the trajectory of embedded ML – and it’s most definitely edge-bound.
Dylan presents the Cortex-M-based image classification game