There is no standard way to measure machine learning performance today, which means there is no single answer to how a company should build a processor for ML across all use cases while balancing compute and memory constraints. For a long time, every group has picked a definition and a test to suit its own needs. This lack of a common understanding of performance hinders customers' buying decisions, slows the growth of the industry, and limits the rate of AI innovation in the world today.
To solve these challenges and accelerate innovation, the industry needs standard benchmarks, datasets, and best practices in all markets. Arm and MLCommons, a global engineering consortium, are working together to push the industry forward in all three of these areas. By combining the three, we can create sustainable and healthy growth of breakthrough applications for the world.
MLCommons is a global engineering nonprofit that takes a holistic approach to measuring performance, creating datasets, and establishing best practices. Its benchmarking group enables open, transparent consensus among competing entities to create a level playing field, and it is supported by more than 30 founding members from the commercial and research communities. Its practices enforce replicability to ensure reliable results and complement micro-benchmark efforts. MLCommons also keeps benchmarking affordable, so everyone can participate to help grow the market and increase innovation together. David Kanter elaborates on MLCommons below:
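To make the replicability point concrete, here is a minimal, hypothetical micro-benchmark sketch in Python. It is not an MLPerf harness; the `benchmark` helper and its parameters are illustrative assumptions, but it shows the ingredients a replicable measurement typically fixes: a seeded input so every run sees identical data, warm-up iterations to exclude one-time setup costs, and repeated timings summarized by robust statistics rather than a single noisy number.

```python
import statistics
import time

import numpy as np


def benchmark(infer, input_shape, warmup=5, repeats=30, seed=0):
    """Time an inference callable with the basics of a replicable measurement:
    a seeded input, warm-up runs that are not timed, and enough repeats to
    report stable summary statistics."""
    rng = np.random.default_rng(seed)                 # fixed seed -> same input every run
    data = rng.standard_normal(input_shape).astype(np.float32)

    for _ in range(warmup):                           # warm-up runs excluded from timing
        infer(data)

    latencies = []
    for _ in range(repeats):
        start = time.perf_counter()
        infer(data)
        latencies.append(time.perf_counter() - start)

    return {
        "median_s": statistics.median(latencies),     # robust central value
        "p90_s": float(np.percentile(latencies, 90)), # tail latency
        "runs": repeats,
    }


# Example: time a trivial stand-in "model" (an elementwise op) on an image-shaped input.
if __name__ == "__main__":
    print(benchmark(np.tanh, input_shape=(1, 224, 224, 3)))
```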
“We are at a unique inflection point in the development of ML and its ability to solve challenges in communication, access to information, health, safety, commerce, and education,” said David Kanter, Executive Director of MLCommons. “At MLCommons, the brightest minds from leading organizations across the globe will collaborate to accelerate machine learning innovation for the benefit of humanity as a whole.”
Arm and other AI pioneers are working with MLCommons to share industry insights and market trends across the mobile, server, HPC, tiny embedded, and autonomous markets, ensuring that the benchmarks are representative of real-world use cases (see the MLCommons organization diagram below for more information).
Figure: MLCommons organization diagram
Companies often balance effort between internal benchmarking and industry benchmarking. Internal efforts focus on improving processor IP for the needs of specific customers, while industry benchmarking efforts improve processor IP for the broad needs of the industry. To achieve this balance cost-efficiently, we need industry-wide support to create benchmarks, datasets, and best practices that empower the whole industry. Working collaboratively can be a powerful enabler of improved business performance, but successful collaboration rarely emerges out of the blue and should not be taken for granted. If you are thinking about joining these efforts, check out MLCommons for more information.