With Arm’s vast microprocessor ecosystem at its foundation, the world is entering a new era of pervasive intelligence: Tiny Machine Learning. Professor Vijay Janapa Reddi, a pioneering machine learning systems engineer at Harvard University, walks us through the breakthroughs shaping this emerging field and explains in an interview why the future is not Big Data; it’s tiny…
“Today, there are billions of low-power widgets out in the world gathering data, in everything from refrigerators to agricultural tools. Engineers have unlocked the ability to use them as a giant network for machine learning, sitting right where the digital and analog worlds meet. This meeting place is what is now commonly referred to as Tiny Machine Learning (TinyML). TinyML devices can run data analytics at extremely low power, sparking a change that is set to be bigger than the arrival of the internet.
You may still hear a lot of talk about Big Data. But Big Data is old data. It takes a long time to generate. It must then be collected, cleaned and processed before entering the machine learning training pipeline. There is a huge gap before it starts producing value.
TinyML relies on real-time data and enables ubiquitous intelligence. It promises reduced latency and energy consumption when compared with traditional machine learning systems, as well as more robust data security – all while needing little to no internet connectivity to run. TinyML can already be seen at work in everything from improving customer service to monitoring the health status of wind turbines. But that’s just the beginning.
Thanks to Arm-based microcontrollers, the ecosystem was already out there. The Arm Cortex-M processor is everywhere, forming a bedrock of embedded systems that is both low-cost and highly efficient.”
People say a picture is worth 1,000 words. When you run an inference on a picture, TinyML will soon use contextual reasoning to figure out the relationships between the objects within it. It will tell you much more about what's going on, whether that’s someone’s state of mind and feelings at that moment, or what they value. Forget a picture being worth 1,000 words; a picture will soon be worth an entire novel, thanks to on-device tiny machine learning.
“This is a whole new world of systems engineering. It’s like building the plane at the same time as trying to fly it.”
TinyML’s big breakthrough hinges on the fact that the world didn’t need to wait for the arrival of a ground-breaking new supercomputer to achieve it. Thanks to Arm-based microcontrollers, the ecosystem was already out there. The Arm Cortex-M processor is everywhere, forming a bedrock of embedded systems that’s both low-cost and highly efficient.
Five or six years ago, I became very interested in the possibility of embedding ML onto small devices. Several others were thinking the same thing. The question was how to build functioning intelligence on top of that microprocessor ecosystem. That has since changed: as a community, we have built datasets, machine-learning models, and frameworks that enable the first level of deployment on Arm-based microcontrollers.
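To make that deployment path a little more concrete, here is a minimal sketch, not taken from the interview, of the typical first step: converting a small model to a fully int8-quantized TensorFlow Lite file compact enough for a Cortex-M class microcontroller running an embedded runtime such as TensorFlow Lite for Microcontrollers. The model architecture and calibration data below are placeholders for illustration only.

# Hypothetical sketch: post-training int8 quantization of a small Keras model,
# the usual first step before deploying to a Cortex-M microcontroller with an
# embedded runtime such as TensorFlow Lite for Microcontrollers.
import numpy as np
import tensorflow as tf

# Illustrative tiny model; a real application would use its own architecture and data.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 10, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_data():
    # Placeholder calibration samples; in practice these come from the real dataset.
    for _ in range(100):
        yield [np.random.rand(1, 49, 10, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model)} bytes")

On the device, a file like this is typically compiled into the firmware image and executed by the microcontroller’s on-chip inference runtime.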
I'm a big fan of team science. As academics, we tend to do a good job of shining light on new areas. But after that, it’s about inviting private companies to do what they do best, to collaborate, and to enable the adoption and commercialization of technology ideas. We’ve been working with other companies to help them build platforms that scale these ML systems across a range of different devices. There were lots of technical problems achieving the rich heterogeneity that we needed. When you achieve extreme efficiency, you tend to lose flexibility. But as the ecosystem has matured, we’ve been able to tackle that challenge.
“Arm is now taking things even further. Cortex-M class processors have enabled this ML leap... Now it’s about making it run even faster. Arm has high-performance ML accelerators, for example the Ethos-U55. And we’ve been able to access both Cortex-M and Ethos machine learning processors from Arm.”
Thanks to these TinyML developments, AI is going to become ubiquitous. We will live in a world that’s constantly processing data, where embedded intelligence is working proactively with you, not just for you.
I've always been drawn to the areas in which technology interfaces with humans. And I always ask whether I bring something new that can help steer the science in the right direction. Everybody should be contributing something that they're uniquely capable of doing. After all, we are all unique and different and that's how I value our societal roles.
Sustainability is a great example. Of course, we need to look at reducing the amount of energy that compute is using, to lessen its impact. But I’ve been looking at it another way: how can we use computing to aid sustainability? Take food waste. Most of the time we throw food away because we simply don’t know for sure whether it has gone bad, so we end up throwing a lot out. Imagine instead if we could print a biodegradable TinyML processor on the flexible film of food packaging. We could use limited energy-harvesting mechanisms to detect, from the chemical processes in the food, whether it’s still good to eat. This requires us to innovate at a new level, such as developing sustainable fabrication methodologies and lowering manufacturing costs to sub-cent prices so that bespoke solutions become cost-effective. We’d need new tools and workflows that enable ML systems engineering at that scale.
I find that very exciting.
“We also need a lot more people working in ML systems engineering… Professionals who can bridge the gap between theoretical ML concepts and the practical engineering involved.”
Arm is now taking things even further. Cortex-M class processors have enabled this ML leap, and people can see the potential. Now it’s about making it run even faster. Arm has high-performance ML accelerators, for example the Ethos-U55, and thanks to Arm’s innovation they are accessible to everyone. We’ve been able to access both Cortex-M processors and Ethos machine learning accelerators, including the Ethos-U55, from Arm.
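As a rough illustration of how such an accelerator fits into the workflow, the quantized model from the earlier sketch could be compiled for an Ethos-U55 with Arm’s open-source Vela compiler (the ethos-u-vela Python package). The accelerator configuration shown is an assumption for illustration, not a recommendation.

# Hypothetical sketch: invoking Arm's Vela compiler to optimize an int8 TFLite
# model so that its supported operators run on an Ethos-U55 microNPU.
import subprocess

subprocess.run(
    [
        "vela",
        "model_int8.tflite",                      # int8 model from the earlier sketch
        "--accelerator-config", "ethos-u55-128",  # assumed 128-MAC Ethos-U55 configuration
        "--output-dir", "vela_out",               # optimized model is written here
    ],
    check=True,
)

Roughly speaking, the output is another .tflite file in which the operators the accelerator supports are mapped onto the Ethos-U55, while anything unsupported falls back to the Cortex-M CPU.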
There are some potential challenges to work out. First, there is the data, the fuel that powers the rocket ship. Once AI is interacting with the analog physical world, it needs the right interface through which to understand that world. That’s not something we’ve yet figured out for this new world where large language models (LLMs) like ChatGPT meet TinyML devices.
To usher in the new era of ubiquitous intelligence, where AI is increasingly embodied everywhere, we need a lot more people working in ML systems engineering. We simply don’t have the scale of the workforce to support the needs of society. There’s a critical need for professionals who can bridge the gap between theoretical ML concepts and the practical engineering involved. They have to understand the entire lifecycle of ML systems – from data engineering and model training to deployment, optimization and maintenance.
ML systems engineering needs to be a distinct discipline. People do not seem to realize just how big, nor how revolutionary, this work is. If ML algorithm developers are like astronauts, ML engineers are like rocket scientists and mission control specialists, without whom there would be no exciting journey to explore the vast unknowns of space.
At the moment, ML models are getting lots of attention. Models are also already very capable of writing their own code. But the machines are still not so good at putting the nuts and bolts together; they have not yet made it to the embodied aspect.
This is a whole new world of systems engineering. It’s like building the plane at the same time as trying to fly it. When you program with machine learning, you do not write code. You have to understand data and how to program with it. That’s a big problem, because you don't control the data. It's very much like being a parent: the best you can do is let your child out into the world, hoping that you've taught them the right things and exposed them to the right influences. And you have to accept that it’s not going to be perfect.
Coding machine learning systems is similarly subjective. But that’s also where the creativity and potential lies.
Anyone shifting to ML systems from traditional code will be entering a crazy realm and speaking a whole new language. So, education, outreach, and upskilling are all critical elements. And to make this technology pervasive, it needs to be understood, engineered, and put into production. Only then can we unlock the full potential of technologies like TinyML and enable ubiquitous intelligence.”
Collated ML References: https://mlsysbook.ai/references.html
Tiny Machine Learning Open Education Initiative (TinyMLedu). Available from: tinyml.seas.harvard.edu
To find out more about the IP available, including Arm Academic Access, please visit our website.
Explore Curated Resources from Arm Education