This is the second in a series of blogs about floating-point quality in GPUs. In part I, I claimed that a lot of programmers don't really understand floating-point numbers, and argued that if you're going to use it for anything remotely tricky, you'd better be prepared to learn how it works in more detail than you probably wanted to. I explained Stuart's test, and showed that it reveals how many bits of floating-point precision are used in the GPU fragment shader. That was good fun, but the test has other interesting things to tell us. In this installment, I'll talk about those.

## The test, and the results

Stuart's test program uses a special fragment shader to compute a gray-scale intensity value at every pixel on the screen. My version is shown here, as a reminder.

```glsl
precision highp float;
uniform vec2 resolution;

void main( void ) {
    float y = ( gl_FragCoord.y / resolution.y ) * 26.0;
    float x = 1.0 - ( gl_FragCoord.x / resolution.x );
    float b = fract( pow( 2.0, floor(y) ) + x );
    if (fract(y) >= 0.9)
        b = 0.0;
    gl_FragColor = vec4( b, b, b, 1.0 );
}
```

*Box 1: Youi Labs GPU precision shader (slightly modified)*

In my previous post, I went over the code in detail, so here I'll just summarize: the shader draws a series of 26 horizontal bars. The gray value for each bar is, ideally, a linear ramp from 1.0 (white) on the left side to 0.0 (black) on the right. However, the gray value is corrupted by first adding it to 2^B (where B is the index of the bar the pixel is in), and then throwing away the integer part of the sum. This reduces the precision of the gray value by one bit in each successive bar, causing the ramps to become increasingly blocky. Eventually, all the bits are thrown away and the bar becomes completely black.
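We can watch this precision loss happen without a GPU by emulating one pixel of the shader on the CPU. The sketch below is Python rather than GLSL (the helper names `f32` and `bar_value` are mine); it squeezes a double down to single precision with `struct`, which uses the host's round-to-nearest mode, and assumes the 24-bit significand described in part I:

```python
import math
import struct

def f32(x):
    # Squeeze a Python double down to IEEE-754 single precision;
    # the conversion rounds to nearest (the CPU's default mode).
    return struct.unpack('f', struct.pack('f', x))[0]

def bar_value(B, x):
    # One pixel of bar B: add the grey value x (in [0,1)) to 2^B
    # in single precision, then keep only the fractional part,
    # as the shader's fract() does.
    s = f32(2.0 ** B + x)
    return s - math.floor(s)

# The 23 fraction bits of the significand leave 2^(23 - B)
# distinct grey levels in bar B:
for B in (20, 22, 23):
    levels = {bar_value(B, i / 4096.0) for i in range(4096)}
    print(B, sorted(levels))
```

Each bar B retains 2^(23 - B) distinct grey levels, which is exactly the halving of precision that shows up as increasingly blocky ramps in the images.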

In his blog, Stuart published pictures of the images this shader draws for six mobile GPUs and one high-end desktop graphics card. The images vary in two basic ways. One is just the number of non-black bars; as we saw last time, that number turns out to equal the number of fractional bits in the shader engine's floating-point significand. The other is perhaps more striking: the bars make quite different patterns on the screen. That's the issue I want to talk about here.

When we look at the images, there seem to be two distinct populations: one group, consisting of the NVIDIA Tegra 3, Vivante GC4000, and Qualcomm Adreno 225, produces bars that are white all the way to the left edge of the screen, but that trail off to the right. The resulting shape reminds me of a killer whale's dorsal fin, so I'll call this the "orca" pattern (see figure 1). The other group, consisting of the NVIDIA desktop GPU and the two ARM Mali devices, produces a symmetrical pattern which I'll call the "beehive" shape (see figure 2). (The Imagination SGX544 does something slightly different, but seems to be in the beehive camp as well.) What do these shapes tell us? Is one better than the other?

*Figure 1: "Orca" pattern (Huawei Ascend D1 / Vivante GC4000)*

*Figure 2: "Beehive" pattern (Nexus 10 / Mali-T604)*

In his blog, Stuart equates good floating-point quality with having a lot of bars that are white all the way to the left edge of the screen. So, he really likes the "orca" GPUs, and isn't impressed with the "beehive" camp. In particular, he says:

"

The drift from the left edge indicates error in calculation (areas that should be white are black), which would translate into undesirable visual glitches if not accounted for."

Is he right? To find out, we'll have to look at what's going on inside the GPU's floating-point units when the shader is running; but before we do that, we have to dive a little deeper into how floating-point works.

### More detail than you really wanted, part 2

In part I of this series, I gave a quick introduction to a generic single-precision floating-point format with eight bits of exponent and twenty-four bits (including the hidden bit) of significand. I ended with an example of what happens when you add two numbers of different magnitude, say eight million and 11.3125. We start with this:

```
(-1)^0 x 2^22 x 1.11101000010010000000000 = 8000000.0
(-1)^0 x 2^3  x 1.01101010000000000000000 = 11.3125
```

and align the binary points by shifting the smaller number nineteen bits to the right. After we do that, the smaller number no longer has the usual '1' bit to the left of the binary point, so we say that it is *denormalized*. The numbers we want to add now look like this:

```
(-1)^0 x 2^22 x 1.11101000010010000000000
(-1)^0 x 2^22 x 0.00000000000000000010110(1010...0)
```

and the sum is obviously

```
(-1)^0 x 2^22 x 1.11101000010010000010110(1010...0) = 8000011.3125
```

Notice that the red bits (the ones shown in parentheses) don't fit into the significand anymore. The question is, what should we do with them? The easiest thing is just to drop them on the floor; in the numerics business, that's called *round-toward-zero* (RTZ) or *truncation*. It is equivalent to pretending the red bits are all zero, even if they aren't. Converting ones into zeros introduces error; in this case, rounding toward zero gives us

```
(-1)^0 x 2^22 x 1.11101000010010000010110 = 8000011.0
```

and a total error of 0.3125. If you think about it, the *worst-case* error occurs when all the red bits started out as ones, at which point the error we're introducing into the significand is

```
0.00000000000000000000000111... = 2^-24 + 2^-25 + 2^-26 + ...
```

or about 2^-23.
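An RTZ unit is easy to mimic in software. The sketch below is Python, not anything a real GPU runs; `rtz_to_24_bits` is a helper name I made up, and it handles positive values only. It truncates the significand to 24 bits and reproduces the 0.3125 error from the example:

```python
import math

def rtz_to_24_bits(x):
    # Truncate a positive value's significand to 24 bits
    # (1 hidden bit + 23 fraction bits): round-toward-zero.
    m, e = math.frexp(x)              # x == m * 2**e, 0.5 <= m < 1
    m = math.floor(m * 2 ** 24) / 2 ** 24
    return math.ldexp(m, e)

print(rtz_to_24_bits(8000011.3125))   # -> 8000011.0 (error 0.3125)
```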

If we're willing to work just a little harder, we can do better. Instead of dropping the red bits, we can round them up or down to whichever 24-bit significand value is closer. That turns out to be easy: if the first red bit is zero, we truncate (round down) as above. If it's one, *and* at least one other red bit is a one, we round up. In the example above, our ideal sum

```
(-1)^0 x 2^22 x 1.11101000010010000010110(1010...0) = 8000011.3125
```

is rounded up to

```
(-1)^0 x 2^22 x 1.11101000010010000010111 = 8000011.5
```

for a total error of 0.1875, quite a bit better than the round-toward-zero result. If the *first* red bit is a one, and no *other* red bit is, we're exactly halfway between two representable values; what do we do then? Various tie-breaking rules are possible; the preferred one (and the required default for IEEE-754-2008) is to round whichever way will produce a zero in the least significant bit of the significand. This is called *round-to-nearest-even* (RNE). If we use this rule (or any other round-to-nearest rule), the worst-case error is 2^-24 rather than 2^-23. That may not sound like much improvement, but think about it: using RNE instead of RTZ cuts the worst-case error *in half*. That's a big deal; it's almost like getting an extra bit of precision for free.
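On most CPUs, converting a double to single precision applies exactly this RNE rule, so we can check the worked example directly. This is a Python sketch; `f32_rne` is just a helper name for the conversion:

```python
import struct

def f32_rne(x):
    # Double -> single conversion uses the hardware default
    # rounding mode, round-to-nearest-even.
    return struct.unpack('f', struct.pack('f', x))[0]

print(f32_rne(8000011.3125))   # -> 8000011.5 (error 0.1875)
print(f32_rne(4194304.25))     # exact tie -> 4194304.0 (even)
print(f32_rne(4194304.75))     # exact tie -> 4194305.0 (even)
```

The last two lines show the tie-breaking rule at work: both inputs sit exactly halfway between representable values, and both round toward the neighbor whose last significand bit is zero.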

### Round-up Time

What does all this have to do with the orcas and the beehives in Stuart Russell's images? His shader (see Box 1 above) does more or less what we did in the examples in the previous section: it adds a series of ever-larger integers to a set of grey values between 1.0 and 0.0, causing an ever-greater loss of precision. Let's consider what happens in the 23rd bar, where we're adding the grey value to 2^22. The power of two is represented as

```
(-1)^0 x 2^22 x 1.00000000000000000000000 = 4194304.0
```

The next largest value we can represent in our floating-point number system is

```
(-1)^0 x 2^22 x 1.00000000000000000000001 = 4194304.5
```

and the *next* largest one is

```
(-1)^0 x 2^22 x 1.00000000000000000000010 = 4194305.0
```

The grey value we're adding to 2^22 is between zero and one, so clearly the floating-point unit is going to have to round the sum to one of these three values. After the addition, the shader throws away the integer part of the sum, so we're going to be left with one of only two possible results: 0.0, or 0.5.
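You can enumerate these neighboring values by stepping the raw bit pattern of a single-precision number (a Python sketch; `next_f32` is a hypothetical helper, valid for positive finite inputs):

```python
import struct

def next_f32(x):
    # The next larger single-precision value: reinterpret the
    # float's 32 bits as an unsigned integer and add one.
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits + 1))[0]

a = 2.0 ** 22        # 4194304.0
b = next_f32(a)      # 4194304.5
c = next_f32(b)      # 4194305.0
print(a, b, c)
```

The spacing between representable values near 2^22 is 0.5, which is why only the grey levels 0.0 and 0.5 can survive in this bar.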

A GPU using RTZ always rounds positive numbers down. So, if the grey value is less than 0.5, the sum will be rounded down to 4194304.0, and we'll end up with an output grey value of 0.0. If the grey value is greater than 0.5, the sum will be rounded (down again) to 4194304.5, and we'll end up with an output value of 0.5. Looking at the topmost visible bar in Figure 1, that's exactly what we see; the right half of the bar (initial grey values less than 0.5) becomes black, and the left half (initial values greater than 0.5) becomes 50% grey. *The "orca" GPUs are using round-toward-zero!*

A GPU using RNE, on the other hand, will round the sum to the nearest value it can represent. When the grey value is less than 0.25, the sum will be rounded down to 4194304.0, producing black. When it is between 0.25 and 0.75, the sum will be rounded to 4194304.5, producing 50% grey. When the grey value is above 0.75, the sum will be rounded up to 4194305.0, which corresponds logically to white; however, when the integer part of the sum is discarded, we'll end up with black again. That's what produces the "drift from the left edge" that Stuart refers to in his blog, and that we see in Figure 2. *The "beehive" GPUs are using round-to-nearest.*
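The three RNE regimes are easy to verify on the CPU, whose double-to-single conversion rounds to nearest (a Python sketch; `bar23` is my name for one pixel of the 23rd bar):

```python
import math
import struct

def f32(x):
    # Round to single precision using the CPU's default
    # round-to-nearest mode.
    return struct.unpack('f', struct.pack('f', x))[0]

def bar23(x):
    # One pixel of the 23rd bar: add the grey value to 2^22 in
    # single precision, then throw away the integer part.
    s = f32(2.0 ** 22 + x)
    return s - math.floor(s)

# Round-to-nearest splits the ramp at 0.25 and 0.75:
print(bar23(0.10))   # below 0.25 -> 0.0 (black)
print(bar23(0.40))   # between    -> 0.5 (50% grey)
print(bar23(0.90))   # above 0.75 -> rounds up to 4194305.0,
                     # so dropping the integer part gives 0.0
```

That third case is the source of the black region at the left edge of the beehive images.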

To make visualizing this a little easier, we can modify the shader so that it preserves the grey value of 1.0 that results when the sum is rounded up to an integer. Box 2 shows the code, and figure 3 shows the result of running it on another "beehive" GPU, an AMD desktop part (Radeon HD3650). Compared to figure 2, the bars now extend all the way to the left edge of the image, and there's an extra twenty-fourth bar corresponding to that "extra bit of precision" that round-to-nearest (sort of) gives us.

```glsl
precision highp float;
uniform vec2 resolution;

void main( void ) {
    float y = ( gl_FragCoord.y / resolution.y ) * 26.0;
    float x = 1.0 - ( gl_FragCoord.x / resolution.x );
    float p = pow( 2.0, floor(y) );
    float b = ( p + x ) - p;
    if (fract(y) >= 0.9)
        b = 0.0;
    gl_FragColor = vec4( b, b, b, 1.0 );
}
```

*Box 2: Precision shader modified to produce output in range [0.0, 1.0]*
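The effect of the `( p + x ) - p` trick in Box 2 can be checked on the CPU too (a Python sketch, with the host's round-to-nearest conversion standing in for a "beehive" GPU; `bar23_modified` is my name):

```python
import struct

def f32(x):
    # Round to single precision (round-to-nearest on the host).
    return struct.unpack('f', struct.pack('f', x))[0]

def bar23_modified(x):
    # Box 2 subtracts p instead of calling fract(), so a sum that
    # rounds up to the next whole number survives as 1.0.
    p = 2.0 ** 22
    return f32(f32(p + x) - p)

print(bar23_modified(0.9))   # -> 1.0 (fract() would have given 0.0)
print(bar23_modified(0.4))   # -> 0.5
```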

*Figure 3: Shader modified to allow grey levels in range (0.0,1.0)*

Looking at pictures is fun, but in this case the difference is easier to see if we just plot the input and output grey values for the top few bars, for both "orca" and "beehive" GPUs.

Figure 4 shows what you get. (What you're seeing is exactly the same data as in Figures 1 and 3, at least for bars 22-24 — we're just viewing it as a graph, rather than as a grey value.) What do we see? The RNE output is a better approximation to the input than the RTZ output; also, its average error is zero, while the RTZ output has a bias (i.e., a non-zero average value).

Still not convinced? In figure 5 I've plotted the *error* in the RTZ and RNE curves — that is, the absolute value of the difference between output and input. If you study them a bit, and integrate the area under the curves in your head, you'll be pleased (but not surprised!) to discover that on average, **the RNE method produces exactly half the error of the RTZ method**.
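Rather than integrating by eye, we can estimate the two average errors numerically for the 23rd bar, where the representable spacing is 0.5 (a Python sketch; the sampling grid is an arbitrary choice of mine):

```python
import math

SPACING = 0.5   # gap between representable values near 2^22

def rtz_err(x):
    # Truncation error: distance down to the previous multiple
    # of the spacing.
    return x - math.floor(x / SPACING) * SPACING

def rne_err(x):
    # Round-to-nearest error: distance to the closest multiple.
    return abs(x - round(x / SPACING) * SPACING)

xs = [(i + 0.5) / 1000.0 for i in range(1000)]
mean_rtz = sum(map(rtz_err, xs)) / len(xs)
mean_rne = sum(map(rne_err, xs)) / len(xs)
print(mean_rtz / mean_rne)   # close to 2.0
```

The averages come out to about 0.25 for RTZ and 0.125 for RNE: the factor of two the error plots show.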

*Figure 4: Graph of RNE output*

### Whose GPU has the highest quality floating-point unit?

Now we can finally answer the question: What do the shapes in Stuart's images tell us about floating-point quality in the GPUs he tested? In his view, they mean that the RTZ GPUs (specifically, Vivante GC4000 and Qualcomm Adreno 225) produced the highest quality output. But in fact, the opposite is true: GPUs that perform RNE rounding, such as ARM's Mali-T604, produce more accurate results and lower error. That's why round-to-nearest-even is specified as the default rounding method in IEEE-754-2008. Stuart is welcome to prefer the orca shape over the beehive; but it'll have to be on the grounds of personal taste, not quality.