OpenGL ES 3.1 isn’t as obviously a big deal as its predecessor, OpenGL ES 3.0, which added over two dozen major features, and extended both the API and the shading language in almost every possible direction. After all, ES 3.0 took five years to create, and was intended to drive hardware requirements for a new generation of mobile GPUs. ES 3.1, on the other hand, was done in about a year, and is explicitly designed to run on most if not all existing ES 3.0-capable hardware. It’s no wonder that by comparison, it looks like a relatively modest advance. But is it? Here’s my view:
Many of the features in the new API amount to filling in gaps in ES 3.0 (bitfield operations in the shading language! Multidimensional arrays!), and continuing our efforts (which began in ES 3.0) to tighten the specification, improve application portability across implementations, and reduce application and driver overhead. Don’t get me wrong, these features are very important – they make life much better for programmers, leading ultimately to more, better, and cooler applications for everyone. And I can tell you, specifying and testing them is hard (and essential) work. But they’re kind of hard to appreciate unless you’re a standards geek, or a graphics programmer.
However, I claim that OpenGL ES 3.1’s headline features are going to change the way we do mobile graphics, in ways that will be obvious to everyone. For my money, there are two that stand out. First, it adds compute shaders, which allow the GPU to be used for general-purpose computing, tightly coupled with GPU-based graphics rendering. Second, it adds indirect drawing commands, which allow the GPU to read drawing command parameters from memory instead of receiving them directly from the CPU. I’ll explain why that’s important in a moment.
Compute support in OpenGL ES 3.1 consists of a handful of new features that sound minor when considered individually, but have huge implications when combined. (This happens all the time in the tech industry. Hypertext is a way of linking related documents and data (remember HyperCard?), and the internet is a (large) group of networked computers that agree to exchange data using a standard set of protocols. Put them together, and you get the World-Wide Web, which is a different animal altogether.)
The first critical compute feature OpenGL ES 3.1 adds is direct access to memory: shader programs can read and write arbitrary data stored in memory buffers or texture images. The second critical feature is a set of synchronization primitives that allow applications to control the ordering of memory accesses by different threads running in parallel on the GPU, so that results don’t depend on what order the threads run in. The third is the ability to create and dispatch compute shaders, programs for the GPU whose invocations correspond to iterations of a nested loop rather than to graphics constructs like pixels or vertices.
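To make that concrete, here is roughly what those three pieces look like at the API level. This is only a sketch: the program and buffer objects (computeProgram, ssbo) and the work-group counts are placeholders created elsewhere, but the calls themselves are standard ES 3.1.

```c
/* Sketch only: computeProgram, ssbo, groupsX and groupsY are assumed to
 * have been created and sized elsewhere. */

/* 1. Memory access: expose a buffer to shaders as a shader storage block
 *    on binding point 0. */
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);

/* 2. Compute dispatch: launch a grid of work groups; the shader's
 *    local_size layout qualifier sets the number of threads per group. */
glUseProgram(computeProgram);
glDispatchCompute(groupsX, groupsY, 1);

/* 3. Synchronization: make the shader's writes visible to whatever reads
 *    the buffer next (here, a later vertex fetch). */
glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);
```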
With these features, you can do things like this: Create a 2D array in GPU memory representing points on a piece of cloth, and global data representing objects or forces acting on the cloth. Dispatch a compute shader that creates a thread for every point in the array. Each thread reads the position and velocity of its point on the cloth, and updates them based on the forces acting on the cloth.
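A compute shader for that cloth update might look something like the following. This is a deliberately simplified sketch, not the actual demo code: the buffer layout, uniform names, and the trivial force integration are all illustrative.

```c
/* Simplified cloth-update shader (GLSL ES 3.10), embedded as a C string.
 * One invocation per cloth point; positions and velocities live in two
 * shader storage buffers. The physics here is a placeholder. */
static const char *clothComputeSrc =
    "#version 310 es\n"
    "precision highp float;\n"
    "layout(local_size_x = 8, local_size_y = 8) in;\n"
    "layout(std430, binding = 0) buffer Pos { vec4 position[]; };\n"
    "layout(std430, binding = 1) buffer Vel { vec4 velocity[]; };\n"
    "uniform vec4  u_force;   // net force on each point (illustrative)\n"
    "uniform float u_dt;      // timestep\n"
    "uniform uint  u_width;   // cloth grid width, in points\n"
    "void main() {\n"
    "    uvec2 p = gl_GlobalInvocationID.xy;\n"
    "    uint  i = p.y * u_width + p.x;\n"
    "    vec4  v = velocity[i] + u_force * u_dt;  // integrate velocity\n"
    "    position[i] += v * u_dt;                 // integrate position\n"
    "    velocity[i]  = v;\n"
    "}\n";

/* One thread per point: with an 8x8 local size, dispatch width/8 by
 * height/8 work groups to cover the grid (sizes assumed multiples of 8).
 * clothProgram is assumed to be compiled and linked from clothComputeSrc. */
glUseProgram(clothProgram);
glDispatchCompute(clothWidth / 8u, clothHeight / 8u, 1u);
```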
Figure 1: A rather nice carpet, animated by an ES 3.1-style compute shader, has a frightening encounter with a big shiny flying donut. Photo (and demo) courtesy of Sylwester Bala, Mali Demo Team. You can watch the video here.
Indirect drawing sounds even more innocent than the various features that support GPU computing; it just means that the GPU can accept a drawing command whose parameters (such as how many items to draw, and where to find their vertices) are stored in memory, rather than passed as function-call arguments by the CPU. What makes this interesting is that the memory buffer containing the parameters is fully accessible to the GPU – which means that a compute shader can write them. So for example, an application can fire off a compute shader that generates geometry data into a vertex buffer object, and also fills in an indirect drawing command that describes that data. After the compute shader finishes, the GPU can proceed to render the geometry as described in the buffer, without any additional work by the application or the CPU.
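Here is a rough sketch of how those pieces might fit together. The buffer and program names are placeholders assumed to be set up elsewhere, but the command layout and the API calls are as ES 3.1 defines them.

```c
/* Layout of a glDrawArraysIndirect command as it sits in GPU memory. */
typedef struct {
    GLuint count;          /* number of vertices to draw */
    GLuint instanceCount;  /* number of instances        */
    GLuint first;          /* first vertex               */
    GLuint reserved;       /* must be zero               */
} DrawArraysIndirectCommand;

/* Let a compute shader generate both the geometry and its draw parameters.
 * vbo, paramBuf, genProgram, drawProgram and numGroups are assumed to be
 * created elsewhere. */
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, vbo);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, paramBuf);
glUseProgram(genProgram);
glDispatchCompute(numGroups, 1, 1);

/* Make the compute shader's writes visible to the indirect-command read
 * and the vertex fetch that follow. */
glMemoryBarrier(GL_COMMAND_BARRIER_BIT | GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);

/* Draw whatever the GPU just produced; the CPU never sees the parameters. */
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, paramBuf);
glUseProgram(drawProgram);
glDrawArraysIndirect(GL_TRIANGLES, (const void *)0);
```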
There’s other interesting stuff in OpenGL ES 3.1, but I’m out of space to talk about it. By the time you read this, the official specification will be available in the Khronos OpenGL ES registry, and there’ll be lots of information floating around following GDC presentations by me and my fellow Working Group members. Incidentally, if you’re attending GDC, I hope you’ll stop by the ARM booth or one of our technical talks, and/or come to the Khronos OpenGL ES session, where we’ll walk through the OpenGL ES 3.1 specification in detail.
When will you see ES 3.1 in consumer devices? It’s up to the device makers, of course; but the Khronos conformance test should be up and running by this summer, and the API is meant to run on existing OpenGL ES 3.0 hardware, so it shouldn’t be terribly long. It will certainly be supported* on the ARM Mali Midgard GPUs.
As always – got comments or questions? Drop me a line!