Arm joins Facebook and Microsoft to bring next-generation AI to life

News highlights

  • Arm joins Facebook, Microsoft, and others on the ONNX format project to empower innovation within the AI community.
  • Arm is partnering with Facebook to optimize the Facebook application on billions of Arm-based devices.
  • With more than 100 billion Arm-based chips shipped, Arm has joined the ONNX project to optimize AI for edge devices.

At Arm, our commitment to artificial intelligence (AI) starts with developing and delivering technologies that are secure, scalable, and power-efficient. AI is already simplifying and transforming our lives, but we're only scratching the surface of what's possible. AI will increasingly run on end devices, whether that's your smartphone or your car, which means those devices will need more compute power and more capable AI algorithms.

As part of that effort, we're excited to announce that we've joined industry leaders on an open-source project that aims to enable interoperability and innovation in the AI framework ecosystem. The Open Neural Network Exchange (ONNX) format was co-developed by Facebook and Microsoft as a standard for representing deep learning models, giving developers more choice and flexibility as they design their AI frameworks. ONNX is another step towards a more open ecosystem that provides developers with state-of-the-art tools and technologies to drive more innovation within the AI community.

In short, standardization is good for both the compute industry and for developers: it enables interoperability between products and frameworks while streamlining the path from development to production. At Arm, we're already working to accelerate Caffe2 on Arm Cortex-A CPUs and on the millions of Arm Mali GPU-based devices that run the Facebook application. ONNX is another avenue for Arm to continue optimizing AI technologies from mobile to automotive to industrial automation, and the Arm architecture is already powering AI, with more than 100 billion chips shipped so far. As part of the ONNX project, we'll join industry leaders to bring manageable, deployable AI solutions to market through an open-source approach.

Arm believes ONNX smooths the path from research to deployment of scalable AI-based solutions, allowing developers to focus on implementation across markets beyond mobile, including autonomous driving, servers, and industrial automation.

Just the beginning

With AI already running on the Arm architecture today, we'll continue to evolve that architecture to meet the needs of emerging AI and machine learning workloads. By joining the ONNX project, we look forward to playing our part in building the AI frameworks of the future, giving millions of developers easy access to the industry-leading performance and efficiency of AI running on Arm.

Learn more about Arm AI solutions

  • This is interesting news, but I wonder if somebody at Arm could get in contact with me. I have applied for a patent on disruptive science/tech for greatly accelerating convolutional neural network operation on standard processor architectures as well as FPGAs and ASICs. It would fit well within the NEON coprocessor or Mali, and it entirely removes the need for multiplication while maintaining full accuracy on arbitrary network designs.

  • That is a very good thing!

    And at the same time, all of this is so clearly headed in the right direction that it is only natural for Arm to be part of it :D