Imagine you’re 30 meters down, diving above a reef surrounded by amazing-looking creatures and wondering what species the little yellow fish with the silver stripes is. You could fumble around for a fish chart, if you have one, but what you really want is an easier and faster solution. Fast forward to 2019, and technology has provided. Now your waterproof smartphone is enabled by Arm Machine Learning (ML) and Object Detection processors. Your experience is very different.
Your dive mask is relaying information in real time via a vivid heads-up display. An Arm-based chip inside your smartphone is now equipped with an advanced Object Detection processor that picks out the most important scene data, while the operating system tasks a powerful Machine Learning processor with detailed identification of fish, other areas of interest and hazards. The information you’re receiving is intelligently filtered, so you’re not overwhelmed with data. This is exactly what Arm’s Project Trillium and our new ML technologies will enable, and much, much more.
We are launching Project Trillium to kickstart a new wave of invention in the world of artificial intelligence (AI), of which ML is a key part. Getting to this point is the result of significant and prolonged investment from Arm to enable the kind of future devices we and our partners see on the horizon. As we see edge ML introduced rapidly into more products, we expect to see a world in which most ‘things’ are equipped with a new level of smartness. Indeed, my answer to the question: ‘Why would you introduce more intelligence into your device?’ is ‘Why wouldn’t you?’.
In my opinion, the growth of ML represents the biggest inflection point in computing for more than a generation. It will have a massive effect on just about every segment I can think of. People ask me which segments will be affected by ML, and I respond that I can’t think of one that won’t be. Moreover, it will be done at the edge wherever possible, and I say this because I have the laws of physics, the laws of economics and many laws of the land on my side. The world doesn't have the bandwidth to cope with real-time analysis of all the video being shot today, and the power and cost of transmitting that data to be processed in the cloud is simply prohibitive.
Google realized that if every Android device in the world performed three minutes of voice recognition each day, the company would need twice as much computing power to cope. The world’s largest computing infrastructure, in other words, would have to double in size. Also, demands for seamless user experiences mean people won’t accept the latency (delay) inherent in performing ML processing in the cloud. And, to be reliable, ML cannot be dependent on a stable Internet connection, especially when it is governing safety-critical operations.
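The scale of that claim is easy to sanity-check with a back-of-envelope calculation. The sketch below uses purely illustrative numbers (the device count is an assumption, not an Arm or Google figure) to show how quickly a few minutes of per-device audio adds up when it is all shipped to the cloud.

```python
# Back-of-envelope sketch with illustrative numbers (the 2 billion device
# count is an assumption for this example, not a figure from the article).

def cloud_audio_seconds_per_day(devices, minutes_per_device):
    """Total device-seconds of audio the cloud would receive each day."""
    return devices * minutes_per_device * 60

# Assume 2 billion active devices, each sending 3 minutes of voice per day.
total = cloud_audio_seconds_per_day(2_000_000_000, 3)
print(total)  # 360,000,000,000 device-seconds of audio, every day
```

Even with conservative assumptions, that is hundreds of billions of seconds of audio per day to transmit and process centrally, which is why pushing the inference to the device itself is so attractive.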
In addition to the technical logic, privacy and security laws and user expectations mean that most people prefer to keep their data on their device. That preference is backed up by the findings of the AI Today, AI Tomorrow report we sponsored in 2017. Project Trillium will make that possible.
Project Trillium represents a suite of Arm products that gives device-makers all the hardware and software choices they need. It also provides a seamless link to a broad ecosystem of Arm partners delivering neural network (NN) applications, with support for leading frameworks such as Google TensorFlow, Caffe, the Android NN API and MXNet.
The architecture behind the Arm ML processor is purpose-built to be as efficient as possible, and it is completely scalable. It enables the processor, in its launch form, to run almost five trillion operations per second (TOPS) within a mobile power budget of just 1-2 watts, making it equal to the most challenging daily ML tasks. That performance can go even higher in real-world use. This means devices using the Arm ML processor will be able to perform ML independently of the cloud. That’s clearly vital for products such as dive masks, but it is also important for any device, such as an autonomous vehicle, that cannot rely on a stable internet connection.
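To put those figures in perspective, dividing throughput by power gives an efficiency range. The sketch below is a minimal calculation assuming the round figure of 5 TOPS from the paragraph above; the exact launch numbers may differ.

```python
# Minimal efficiency calculation, assuming the "almost five trillion
# operations per second" figure above is taken as a round 5.0 TOPS.
# These are illustrative numbers derived from the quoted range only.

def tops_per_watt(tops, watts):
    """Efficiency in tera-operations per second per watt."""
    return tops / watts

low = tops_per_watt(5.0, 2.0)   # at the 2 W end of the budget
high = tops_per_watt(5.0, 1.0)  # at the 1 W end of the budget
print(low, high)  # 2.5 to 5.0 TOPS/W
```

Either end of that range is well beyond what a general-purpose mobile CPU can deliver at the same power, which is the case for dedicated ML hardware at the edge.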
Today, the technologies within Project Trillium are optimized for the mobile market and smart IP cameras, as that is where edge ML performance is being demanded by device-makers. But as plans to deploy ML across a diverse range of mainstream markets mature, Arm ML technologies will scale to suit requirements.
We already see ML tasks running on Arm-powered devices such as smart speakers featuring keyword spotting. This will continue and expand rapidly. At the high end, there is ML inference (analyzing data using a trained model) being performed in connected cars and servers, and we have an ability to scale our technologies to suit those applications too. We now have an ML processor architecture that is versatile enough to scale to any device, so it is more about giving markets what they need, when they need it. This gives us, and our ecosystem partners, the speed and agility to react to any opportunity.
As well as the Arm ML processor, we also have its cousin the Arm Object Detection (OD) processor. It is a second-generation device; the first-generation computer vision processor is already deployed in Hive security cameras. The OD processor can detect objects from a size of 50x60 pixels upwards and process Full HD at 60 frames per second in real time. It can also detect an almost unlimited number of objects per frame, so dealing with the busiest coral reef, or soccer stadium, is no problem.
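The Full HD at 60 frames per second figure implies a substantial pixel throughput. The quick calculation below simply multiplies out the spec quoted above.

```python
# Throughput implied by the Object Detection processor spec quoted above:
# Full HD (1920 x 1080 pixels) processed at 60 frames per second.

WIDTH, HEIGHT, FPS = 1920, 1080, 60

pixels_per_frame = WIDTH * HEIGHT          # 2,073,600 pixels per frame
pixels_per_second = pixels_per_frame * FPS
print(pixels_per_second)  # 124,416,000 pixels scanned each second
```

Roughly 124 million pixels per second, in real time, is what makes detecting many small (50x60 pixel) objects per frame feasible on-device.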
Project Trillium is all about scalability and versatility, offering a range of performance options based on the compute world’s most widely deployed advanced technologies. For example, some ML applications will not need specialized ML hardware and will run ML on ultra-low-power microprocessors such as the Arm Cortex-M family. Indeed, ML inference is already performed by Cortex-M processors on millions of IoT devices today. Project Trillium helps here too, as it offers an immediate upgrade for ultra-low power devices through highly optimized CMSIS-NN software that is designed to boost processor performance.
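A key technique behind efficient NN libraries like CMSIS-NN is running layers in 8-bit fixed-point (q7) arithmetic rather than floating point. The sketch below is plain Python, not the CMSIS-NN C API; it only illustrates the general idea of a q7 fully connected layer with a 32-bit accumulator, and the function name and shift scheme are simplifications for this example.

```python
# Illustrative sketch (plain Python, NOT the CMSIS-NN C API) of the kind of
# 8-bit fixed-point (q7) arithmetic that libraries like CMSIS-NN use to run
# NN layers efficiently on Cortex-M class cores.

def q7_fully_connected(inputs, weights, bias, out_shift):
    """Fully connected layer on signed 8-bit data: multiply-accumulate in a
    32-bit accumulator, then rescale by a right shift and saturate to q7."""
    outputs = []
    for row, b in zip(weights, bias):
        acc = b  # wide accumulator seeded with the bias
        for x, w in zip(inputs, row):
            acc += x * w
        acc >>= out_shift                      # rescale back toward q7
        outputs.append(max(-128, min(127, acc)))  # saturate to [-128, 127]
    return outputs

# Tiny example: 3 inputs, 2 output neurons, rescaling by a shift of 2.
y = q7_fully_connected([10, -4, 7], [[1, 2, 3], [-1, 0, 4]], [5, 0], 2)
print(y)  # [7, 4]
```

Keeping weights and activations at 8 bits shrinks memory footprint by roughly 4x versus 32-bit floats and lets small cores use fast integer multiply-accumulate instructions, which is exactly the constraint space Cortex-M devices live in.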
In addition, the new ML IP suite includes a Compute Library providing optimizations for higher end ML applications running on Arm Cortex-A CPUs and Arm Mali GPUs.
So, whether developers want to use Arm Cortex-A and Cortex-M CPUs and/or Arm Mali GPUs, and any combination of the Arm ML and Object Detection processors, Project Trillium is Arm’s fullest answer to any market opportunity yet. I am personally immensely proud to have been part of the team delivering it.
In short, Project Trillium will be the backbone of a world where ML does not signal a category of device but a technology capability found in almost all devices, whether they are presenting information to you in real time in amazing new products like smart dive masks, or giving you voice control of your home, office or car.
We are launching Project Trillium two weeks before Mobile World Congress 2018 kicks off. This year’s event halls will feature early implementations in the form of the Arm Object Detection processor in products such as IP security and smart cameras. Future shows will see the fullest range of Arm ML technologies gaining traction far more widely, supporting the growth of smart connected devices as part of a new world built on AI.
Note: Project Trillium is not the commercial brand name for Arm’s ML technology. The codename will be replaced by the commercial brand name in due course. Arm does not accept any responsibility or liability for any third party use of its codenames.