Flipping the FLOPS - how ARM measures GPU compute performance

Jem Davies
September 11, 2013
9 minute read time.

It's time we dealt with the measurement of compute performance in GPUs. In another in a series of ARM blogs intended to enlighten and reduce the amount of confusion in the graphics industry, I'd like to cover the issue of Floating-point Operations Per Second (FLOPS, or GFLOPS or TFLOPS).                                          

In the past, Tom Olson talked about triangles per second, Ed Plowman talked about pixels per second, Sean Ellis addressed floating-point precision and hopefully we managed to amuse people as well as educate. Today let's look at compute performance - it's a useful measure.

Competition is good

...But open and honest competition is better. The market for GPUs is very competitive, with a number of companies supplying IP as well as those who make their own, for inclusion in SoCs. I love competition; how else can you win if you don't have competition? Or, as one of the most competitive people I know said to me: "What is the point in competing if you don't win?" (she was a runner, but suffice to say there are a lot of people round here who want to win at anything they commit to). In this competitive environment, we know that our partners can sometimes struggle to understand performance metrics for GPUs. They need to compare the offerings from multiple suppliers and pick the right product for their needs. This can be a complex subject, but it doesn't have to be as complex as some try to make it. I want to win on honest, open metrics.

Graphics is compute

Graphics is a really computationally intensive problem - you have to do lots of arithmetic in it, which is one reason people have been interested in utilising those capabilities for more than "just" graphics. To draw stuff, we start off by describing some objects in a three-dimensional space by dividing them into a number of triangles and listing the co-ordinates of each vertex of the triangles. We can argue about why we use triangles, and some have, but a triangle is simple, and the three points in it are guaranteed to form a plane. We then define some light sources and give them types and positions; we define the projection model (the camera) and give that a position; we define the colours and surface detail of the objects (made up of those triangles). Sometimes we add lots more detail; sometimes we animate the objects and make them move. After all that, we try to work out what a picture from the camera would look like, if it were projected onto a two-dimensional screen. As you can imagine, there are lots of 3-D equations to solve, and lots of trigonometry. Most of the numbers we use are floating-point numbers, so the rate at which we can perform floating-point arithmetic has a big effect on our graphics performance. It's not the only thing, of course, but it is important. It is certainly good to understand it. 
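The projection step described above can be sketched in a few lines. This is a deliberately minimal illustration of the idea (a pinhole camera at the origin looking down the negative z axis), not how any real GPU pipeline is written:

```python
# Minimal perspective projection sketch (illustrative only, not Mali code).
# A camera at the origin looking down -z projects a 3D vertex onto a
# 2D image plane at distance f (the focal length).

def project(vertex, f=1.0):
    """Project a 3D point (x, y, z) onto the z = -f image plane."""
    x, y, z = vertex
    # Perspective divide: points further from the camera land closer
    # to the centre of the image.
    return (f * x / -z, f * y / -z)

# A triangle is just three projected vertices.
triangle_3d = [(0.0, 1.0, -2.0), (-1.0, -1.0, -4.0), (1.0, -1.0, -4.0)]
triangle_2d = [project(v) for v in triangle_3d]
```

Even this toy version needs a floating-point multiply and divide per coordinate; scale that up to millions of vertices per frame, plus lighting and texturing, and the appetite for floating-point arithmetic becomes obvious.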

Floating point FLOPs focal point


First describe the problem

In our GPUs (and lots of others) we have floating-point operations performed in all the places I described above. Some are in fixed-function units and some are in programmable units. An example may help here: when you load a value from a texture, the texture unit will calculate a memory address, based on the co-ordinates within the texture that you specify, and then possibly interpolate between several values in memory to produce the texture you want, for example by bi-linearly filtering between adjacent values. And, if the texture was in a compressed format like ASTC, the values will have to be decompressed as part of that process as well. That's a lot of calculation (integer and floating-point). It's very good for graphics, but utilising those units for more general-purpose compute is somewhere between a bit hard and impossible.
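To see how much arithmetic hides inside that texture lookup, here is a software sketch of bilinear filtering. Real texture units do this in fixed-function hardware (alongside address calculation and decompression); this is purely to count the operations:

```python
# Bilinear texture filtering sketch (illustrative; real texture units
# do this in fixed-function hardware).

def bilinear(texels, u, v):
    """Sample a 2D grid of texel values at fractional coords (u, v)."""
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    # Interpolate horizontally along the two adjacent rows...
    top = texels[y0][x0] * (1 - fx) + texels[y0][x0 + 1] * fx
    bot = texels[y0 + 1][x0] * (1 - fx) + texels[y0 + 1][x0 + 1] * fx
    # ...then vertically between them: three lerps, roughly nine
    # floating-point operations, for every single texture sample.
    return top * (1 - fy) + bot * fy

texels = [[0.0, 1.0],
          [0.0, 1.0]]
sample = bilinear(texels, 0.5, 0.5)  # halfway between texels
```

None of those operations are visible to, or reusable by, a general-purpose compute program, which is exactly why they should not be counted in a FLOPS figure.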

Some GPUs "just" do graphics and do not do general-purpose compute.

The ARM Mali-400 family, for example, was designed for OpenGL ES 2.0, which has low precision requirements: some operations need to be performed at 32-bit precision, some at 24-bit and some at 16-bit. OpenCL on NEON on the ARM CPU can be used as a compute companion.

Some GPUs do graphics and compute

For example, the Mali-T600 family of GPUs uses the Midgard architecture (described by me in a previous blog). In that architecture, we have arithmetic pipelines that execute instructions like ADD and MUL. We have a balanced mix of scalar and vector (SIMD) units, so we can do multiple operations like that in parallel (e.g. four FP32 or eight FP16 operations at once). We also have dot product instructions and a bunch of trigonometry instructions (sin, cos, tan, etc.).
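Counting the FLOPs in those vector instructions is straightforward arithmetic. The sketch below models it in scalar Python purely to make the accounting explicit; the widths shown match the generic vec4 case described above, not any particular Mali datasheet figure:

```python
# Illustrative FLOP accounting for vector (SIMD) instructions.
# A vec4 ADD performs 4 floating-point operations in one instruction;
# a vec4 dot product performs 4 MULs + 3 ADDs = 7.

def vec4_add(a, b):
    return [x + y for x, y in zip(a, b)]       # 4 FLOPs

def vec4_dot(a, b):
    return sum(x * y for x, y in zip(a, b))    # 4 MULs + 3 ADDs = 7 FLOPs

a = [1.0, 2.0, 3.0, 4.0]
b = [1.0, 1.0, 1.0, 1.0]
```

So a single dot product instruction contributes seven operations to a FLOPS-per-cycle figure, even though it issues in one cycle.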

How should you express the number of floating-point operations in a trigonometric function like sin()? 
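There is no single right answer, because the count depends entirely on the implementation. A hardware sin instruction issues once; a software version might evaluate a polynomial. The sketch below uses a truncated Taylor series in Horner form purely to illustrate the ambiguity; it is not how any GPU actually implements sin():

```python
import math

# How many "operations" is sin()? A truncated Taylor series evaluated
# in Horner form costs a handful of MULs and ADDs (plus a divide here):
#   sin(x) ~= x * (1 + x2*(-1/6 + x2*(1/120 - x2/5040))), x2 = x*x
# A hardware sin instruction hides all of that behind one opcode.

def sin_poly(x):
    x2 = x * x
    return x * (1.0 + x2 * (-1.0 / 6.0
                            + x2 * (1.0 / 120.0 - x2 / 5040.0)))

# Accurate to well under 1e-6 for small angles.
err = abs(sin_poly(0.5) - math.sin(0.5))
```

Depending on whether you count the instruction or the equivalent polynomial, the same shader could claim wildly different FLOPS figures, which is why ARM's rules below refuse to inflate the count for such functions.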

The Mali-T600 series was designed for compute and for the newest APIs like OpenCL, OpenGL ES 3.0, and Microsoft DirectX 11, so it supports full 32-bit precision floating-point operations conformant with IEEE 754-2008. We also do double-precision (64-bit floating-point) and, as an aside, we can also do a wide variety of integer operations, including 64-bit (traditionally GPUs lack good integer capabilities).

To summarise, we have some GPUs with differing performance levels of integer and floating-point arithmetic and differing precisions, with differing levels of usability from code.

Then define your metric

Now comes the thorny problem of how to define a metric that measures how much arithmetic is going on in a GPU: what to measure?

Now here at ARM, we like to be inclusive: partnership is one of our big things, after all. So, I'm prepared to go as far as this: it doesn't matter so much what you do, as long as you show your working (as UK teachers would say to students, i.e. explain the method you are using). However, anyone who doesn't explain their numbers (in small print, even) must be trying to hide something, and that just won't do. So, in the spirit of openness, how do we produce our numbers? Well, the headline is about FLOPS, so for the time being, we're going to ignore integer arithmetic. Here are ARM's rules: 

  • ARM includes only directly-programmable arithmetic operations: classical arithmetic operations exposed to the shader programmer such as ADD, MUL, and vector versions of those.
  • We count the number of ADDs, MULs etc. (including those in dot product operations) that we can execute in one cycle, from a real piece of code in a compute shader. This is our architectural FLOPS rate (measured in FLOPS per cycle).
  • Although we can do some functions (like trig) really efficiently we don't add anything into the mix for these - that way lies madness.
  • From a real, fully laid-out, placed-and-routed synthesis, using real physical IP libraries (e.g. TSMC 28nm HPM, specifying channel lengths etc.), we get a maximum operating frequency. We openly specify in what conditions (e.g. slow-slow silicon corner, Vdd at -10% of Vnom etc.). This is not just a PowerPoint number: our partners should easily be able to achieve this frequency. Most partners, who would use more "typical" parameters, should easily exceed it. If you want to implement on a higher-speed process that burns more power, you can definitely exceed it. This is what we believe is right for an IP supplier. Silicon manufacturers will quote whatever frequency they guarantee their chips at.
  • We multiply the number of FLOPS per cycle by the number of arithmetic pipelines per core, then the number of cores, then by the frequency. That gives you a number of FLOPS. It's a big number, so usually we specify a number of GFLOPS (gigaflops), but soon we'll be using teraflops - we have teraflop cores being developed for delivery this year.
  • For the Mali-T600 series, the headline number is single-precision (32-bit floating-point). We quote a second number which is double-precision (64-bit) FLOPS. For most "graphics" GPUs, that 64-bit number is smaller. For a GPU we would target at high-performance computing or supercomputers (and we have been asked), it might be the same, or even bigger.
  • We'll also show shader code that actually manages to include all those operations. We'll show any difference between real code run on real silicon and the architectural FLOPS rate. Currently we can achieve 97% of the architectural GFLOPS rate on real silicon. We believe that's a very high percentage number compared to others. Perhaps you know better?
  • We also run benchmarks. If you need to know the execution speed of real code, this is probably more useful information to you than looking at architectural numbers! ARM likes independent, third-party benchmarks and there are a host of them to measure performance achieved (rather than architectural numbers). Common ones used for compute-intensive numerical applications are SAXPY and SGEMM, originally from the LINPACK and LAPACK BLAS libraries, although recently companies have been starting to look at GPU computing on consumer devices, e.g. with CLBenchmark from Kishonti. This is a large subject and is really best left to a later blog.
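Putting the rules above together, the final calculation is a straight multiplication. The sketch below shows the arithmetic; all the input numbers are made up for illustration and are not the figures for any particular Mali GPU:

```python
# Architectural FLOPS = FLOPs-per-cycle-per-pipeline
#     x pipelines-per-core x number-of-cores x clock frequency.
# All figures below are illustrative, not any Mali product's numbers.

def architectural_gflops(flops_per_cycle, pipes_per_core, cores, freq_mhz):
    """Peak architectural throughput in GFLOPS."""
    return flops_per_cycle * pipes_per_core * cores * freq_mhz * 1e6 / 1e9

# e.g. 8 FP32 FLOPs/cycle/pipeline, 2 pipelines/core, 4 cores, 500 MHz:
peak = architectural_gflops(8, 2, 4, 500)   # 32.0 GFLOPS
```

Every factor in that product is one a supplier should be able to justify: the per-cycle count from real shader code, and the frequency from a real layout under stated silicon conditions.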

What we don't do

ARM does not include FLOPS from fixed-function units, or from operations only available to graphics, e.g. texture units, blending units, varying interpolation, triangle setup, Z-culling etc.

  • We don't include any relaxed precision operations. We only include full IEEE-compliant ops. The subject of IEEE compliance, precision and rounding modes is complex and there is room for significant confusion here. Explaining and demystifying this is best left to a later blog.
  • We don't make any assumptions about how many operations were involved in calculating any of the library functions that might be implemented as instructions.
  • We don't quote a theoretical maximum frequency that we cannot justify from a real layout/synthesis. We can provide the EDA tools report to back up our claims.
  • We don't quote a maximum frequency for ridiculously hot, leaky processes that cannot be sensibly used by most of our partners.
  • We don't multiply the number we come up with by the ZIP code of our office in San Jose, or shift left by the telephone number of our HQ.

And finally

I have described how we define and produce our architectural FLOPS numbers. It should give you all the ammunition you need to go and question your supplier about how they calculate theirs. Hopefully that will lead to useful, productive conversations. Maybe we need a standard. Maybe it will lead to us changing the way we define our numbers to match others' methods. That's OK, as long as we're open about it. 

I've also indicated the role that benchmarks need to play in describing real-world performance. We need to get industry agreement about which benchmarks matter. Too many benchmarks can lead to confusion.

Like our method? Hate it? Think we're wrong? Want to suggest anything different? Got any amusing tales to tell about how some others do it? Let us know. Feel free to comment on this blog.

Comments

  • Sean Lumly over 11 years ago

    The Mali seems really well positioned to take care of partners optimizing for different targets (eg. perf vs. cost/mm2). I would love to read a blog post on this subject, as I'm sure it would be very enlightening beyond what has already been shared.

    Oh, and congrats on a great interview with Anandtech. I thoroughly enjoyed watching it!

    Sean