Why standard benchmarks matter to AI innovation

Dylan Zika
December 4, 2020
2 minute read time.

There is no standard measurement of machine learning performance today, which means there is no single answer to how a company should build a processor for ML across all use cases while balancing compute and memory constraints. For years, every group picked a definition and a test to suit its own needs. This lack of a common understanding of performance hinders customers' buying decisions, slows the growth of the industry, and limits the rate of AI innovation in the world today.

To solve these challenges and accelerate innovation, the industry needs standard benchmarks, datasets, and best practices across all markets. Arm and MLCommons, a global engineering consortium, are working together to push the industry forward in all three areas. Combining them creates sustainable, healthy growth of breakthrough applications for the world.

  • Benchmarks: Benchmarks have had a tremendous impact on end users and consumers, with results informing purchasing decisions worth one billion USD and growing rapidly. MLCommons has defined ML performance for the industry with MLPerf, which has over 2000 formal submissions from 30 organizations.
  • Datasets: Before you can build an AI benchmark, you have to start with good data. Innovation can only occur when there are open datasets usable by both commercial and academic entities. To help enable open datasets, MLCommons is creating People's Speech, with over 80,000 hours of diverse language speech. It is the largest such dataset available for industry-wide use, the ImageNet of speech datasets.
  • Best practices: To mature, the industry needs best practices. To support them, MLCommons runs an initiative called MLCube (https://github.com/mlperf/mlcube), which provides portable models for experimentation and benchmarking; a minimal illustration of the replicable-measurement practices behind these benchmarks follows this list. MLCommons also runs working groups focused on system design, logging, and power measurement.
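
The replicability these benchmarks enforce comes down to a few concrete habits: a fixed query stream, warm-up runs excluded from the results, and tail-latency reporting rather than a single average. The sketch below is not the MLPerf LoadGen harness, just a minimal, stdlib-only Python illustration of those habits; the toy run_inference function is a stand-in for whatever system would actually be under test.

```python
import random
import statistics
import time


# Toy "model": a fixed amount of arithmetic standing in for one inference.
# In a real submission this would be the actual system under test.
def run_inference(sample: list[float]) -> float:
    return sum(x * x for x in sample)


def benchmark(num_queries: int = 1000, warmup: int = 50, seed: int = 42) -> None:
    # A fixed seed means every run sees the identical query stream,
    # so results can be replicated on other machines.
    rng = random.Random(seed)
    queries = [[rng.random() for _ in range(256)] for _ in range(num_queries + warmup)]

    # Warm-up iterations are excluded from the report: caches, JITs,
    # and CPU frequency scaling distort the first few runs.
    for q in queries[:warmup]:
        run_inference(q)

    latencies_ms = []
    for q in queries[warmup:]:
        start = time.perf_counter()
        run_inference(q)
        latencies_ms.append((time.perf_counter() - start) * 1000.0)

    # Report the distribution, not just the mean: tail latency is what
    # real deployments and purchasing decisions care about.
    latencies_ms.sort()
    print(f"queries:      {len(latencies_ms)}")
    print(f"mean latency: {statistics.mean(latencies_ms):.4f} ms")
    print(f"p50 latency:  {latencies_ms[len(latencies_ms) // 2]:.4f} ms")
    print(f"p99 latency:  {latencies_ms[int(len(latencies_ms) * 0.99)]:.4f} ms")


if __name__ == "__main__":
    benchmark()
```

Reporting p99 alongside the mean is what separates a benchmark from a demo: buyers comparing processors need to know worst-case behavior, not just the average case.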

Why MLCommons?

MLCommons is a global engineering nonprofit that takes a holistic approach to measuring performance, creating datasets, and establishing best practices. Its benchmarking group builds open, transparent consensus among competing entities to create a level playing field, and it is backed by more than 30 founding members from the commercial and research communities. Its practices enforce replicability to ensure reliable results and are complementary to microbenchmark efforts. MLCommons is keeping benchmarking affordable so that everyone can participate, helping to grow the market and increase innovation together. David Kanter elaborates on MLCommons below:

“We are at a unique inflection point in the development of ML and its ability to solve challenges in communication, access to information, health, safety, commerce, and education,” said David Kanter, Executive Director of MLCommons. “At MLCommons, the brightest minds from leading organizations across the globe will collaborate to accelerate machine learning innovation for the benefit of humanity as a whole.”

What is an IP provider's place in MLCommons?

Arm and other AI pioneers are working with MLCommons to share industry insights and market trends across mobile, server, HPC, tiny embedded, and autonomous markets, to ensure that the benchmarks are representative of real-world use cases (see the MLCommons organization diagram below).

Figure: MLCommons organization diagram

Can companies act alone?

Companies often balance internal benchmarking against industry benchmarking. Internal efforts improve processor IP for the needs of specific customers, while industry benchmarking improves processor IP for the broad needs of the industry. Achieving this balance cost-efficiently takes industry-wide support to create the benchmarks, datasets, and best practices that empower everyone. Working collaboratively can be a powerful enabler of improved business performance, but successful collaboration rarely emerges out of the blue and should not be taken for granted. So if you are thinking about joining the effort, check out MLCommons for more information.
