Why Google’s TF Lite Micro Makes ML on Arm Even Easier

Hellen Norman
May 9, 2019
3 minute read time.

Yesterday, at Google I/O, Google announced that they are partnering with Arm to develop TensorFlow Lite Micro and that uTensor – an inference library based on Arm Mbed and TensorFlow – is becoming part of this new project. (See the Mbed blog for more details.)

ML developers will likely know that TensorFlow Lite is an open-source deep learning framework for on-device ML inference with low latency. Its new sibling, TensorFlow Lite Micro – or TF Lite Micro for short – takes efficiency to another level, targeting microcontrollers and other devices with just kilobytes of memory.
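
To give a sense of how lightweight this is, here's a minimal sketch of running inference with the TF Lite Micro C++ API, roughly as it looked in the early examples. The exact header paths and class names have shifted between releases, and g_model is a placeholder for a model converted to a TensorFlow Lite flatbuffer and exported as a C array:

```cpp
// A minimal sketch, loosely based on the early TF Lite Micro examples.
// Header paths and class names vary between releases; g_model is a
// placeholder for your converted .tflite model data.
#include <cstdint>

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model[];  // model flatbuffer as a C byte array

// All the working memory the interpreter gets: a few kilobytes of SRAM.
constexpr int kTensorArenaSize = 10 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

float predict(float x) {
  static tflite::MicroErrorReporter error_reporter;
  static tflite::AllOpsResolver resolver;  // registers the built-in op kernels
  static const tflite::Model* model = tflite::GetModel(g_model);
  static tflite::MicroInterpreter interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, &error_reporter);
  static bool ok = (interpreter.AllocateTensors() == kTfLiteOk);
  if (!ok) return 0.0f;

  interpreter.input(0)->data.f[0] = x;  // write the input tensor
  interpreter.Invoke();                 // run the model
  return interpreter.output(0)->data.f[0];
}
```

Everything – model, interpreter state and tensor arena – lives in a statically allocated buffer, which is what makes the framework a fit for devices with only kilobytes of memory.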

If you have an interest in embedded machine learning, or simply have an ear to the ground in the tech world, you’re likely to have seen the recent announcement from Google’s Pete Warden about the project’s launch. Speaking at the TensorFlow Developer Summit, Pete demonstrated the framework running on an Arm Cortex-M4-based developer board and successfully handling simple speech keyword recognition.

So, why is this project a game changer? Well, because Arm and Google have just made it even easier to deploy edge ML in power-conscious environments. A further benefit of bringing uTensor into the project is its extensibility: support for optimized kernels, such as Arm's CMSIS-NN, and for discrete accelerators helps neural networks run faster and more energy-efficiently on Arm Cortex-M MCUs.
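
As a flavour of what an optimized kernel looks like at the lowest level, here's a sketch using the legacy CMSIS-NN q7 (8-bit fixed-point) API for a single fully connected layer. The weights, biases, dimensions and shift values are placeholders, and newer CMSIS-NN releases expose a different interface:

```cpp
// A sketch of the legacy CMSIS-NN q7 (8-bit fixed-point) API.
// The weights, biases, dimensions and shift values below are placeholders.
#include "arm_nnfunctions.h"

#define IN_DIM  64
#define OUT_DIM 10

// Quantized weights and biases, normally generated offline from a
// trained model (zero-filled here purely for illustration).
static const q7_t weights[IN_DIM * OUT_DIM] = {0};
static const q7_t biases[OUT_DIM] = {0};

static q15_t scratch[IN_DIM];  // working buffer required by the kernel

void dense_layer(const q7_t* input, q7_t* output) {
  // Optimized fully connected layer: uses DSP/SIMD instructions on
  // Cortex-M cores that have them, plain C elsewhere.
  arm_fully_connected_q7(input, weights, IN_DIM, OUT_DIM,
                         1 /* bias_shift */, 7 /* out_shift */,
                         biases, output, scratch);
  arm_relu_q7(output, OUT_DIM);  // in-place ReLU activation
}
```

Frameworks like TF Lite Micro can dispatch to kernels of this kind behind the scenes, so application code stays the same while the heavy lifting is done by hand-tuned routines.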

The Shift to the Edge

On-device inference has been gaining traction in recent years, with an increasing migration of functionality from the cloud to the edge device. The benefits of edge ML are well documented: reliability; consistent performance without dependence on a stable internet connection; reduced latency, since there’s no need for data to travel back and forth to the cloud; and privacy, since data may be less exposed to risk when it stays on the device.

But even where the inference is cloud-based, devices tend to rely on edge-based ML – typically on small, super-efficient processors such as Cortex-M – to wake up the rest of the system. Keyword spotting, as used by Pete to demo this new capability, is a good example of this. By allowing the main system to sleep and keeping the power requirements of the always-on element exceptionally low, embedded devices can achieve the efficiency they need to provide great performance as well as great battery life.
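
The always-on element of such a system can be sketched as a simple loop: sleep, grab a frame of audio, run a tiny keyword-spotting model, and wake the main system only on a hit. Everything named below (read_audio_frame, detect_keyword, wake_main_system) is a hypothetical placeholder for your own drivers, model and wake mechanism:

```cpp
// A hypothetical sketch of the always-on pattern: a tiny keyword-spotting
// model runs continuously on the Cortex-M core and wakes the main system
// only when it hears the keyword. All names here are placeholders.
#include <cstddef>
#include <cstdint>

constexpr size_t kFrameSamples = 320;  // 20 ms of audio at 16 kHz

// Placeholder stubs - replace with your microphone driver, KWS model
// (e.g. a TF Lite Micro interpreter) and platform wake mechanism.
static bool read_audio_frame(int16_t*, size_t) { return true; }
static bool detect_keyword(const int16_t*, size_t) { return false; }
static void wake_main_system() {}        // e.g. assert a GPIO line
static void enter_low_power_sleep() {}   // e.g. __WFI()

void keyword_spotting_loop() {
  int16_t frame[kFrameSamples];
  for (;;) {
    enter_low_power_sleep();                     // sleep until the next frame
    if (!read_audio_frame(frame, kFrameSamples)) continue;
    if (detect_keyword(frame, kFrameSamples)) {
      wake_main_system();                        // hand off to the main system
    }
  }
}
```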

Open Source for Success

The other notable thing about TF Lite Micro is that, like our very own Arm NN, it's open source, which means that you can customize the example code or even train your own model if you so desire. (While TF Lite Micro is the framework of choice for Cortex-M, Arm NN provides a bridge between existing neural network frameworks and power-efficient Arm Cortex-A CPUs, Arm Mali GPUs, the Arm Machine Learning processor and other third-party IP.)

The project is still in its infancy, but as more and more ML moves to the edge, this kind of open-source approach will become increasingly important.

Future Perfect

The technological challenges that once limited 'tiny' edge ML are rapidly evaporating. The recent launch of Arm Helium – the new vector extension of the Armv8.1-M architecture, which will be used in future Cortex-M processors – was great news for developers of small, embedded devices. It is set to bring up to a 15x performance uplift for ML functions and up to a 5x uplift for signal processing functions, compared with existing Armv8-M implementations.

Increasing the compute capabilities in these devices enables developers to write ML applications for decision-making at the source, enhancing data security while cutting down on network energy consumption, latency and bandwidth usage.

As a world of a trillion connected devices fast becomes a reality, Cortex-M-based microcontrollers – which can deliver on-device intelligence with just milliwatts of power – are poised to drive the edge revolution.

If you’d like to know more about ML on Arm Cortex-M, watch our on-demand technical webinar, 'Machine Learning on Arm Cortex-M Microcontrollers', below.

Watch Technical Webinar
