In my previous blog I started defining an abstract machine which can be used to describe the application-visible behaviors of the Mali GPU and driver software. The purpose of this machine is to give developers a mental model of the interesting behaviors beneath the OpenGL ES API, which can in turn be used to explain issues which impact their application's performance. I will use this model in future blogs in this series to explore some common performance potholes which developers encounter when developing graphics applications.
This blog continues the development of this abstract machine, looking at the tile-based rendering model of the Mali GPU family. I'll assume you've read the first blog on pipelining; if you haven't, I would suggest reading that first.
In a traditional mains-powered desktop GPU architecture — commonly called an immediate mode architecture — the fragment shaders are executed on each primitive, in each draw call, in sequence. Each primitive is rendered to completion before starting the next one, with an algorithm which approximates to:
foreach( primitive )
    foreach( fragment )
        render fragment
As any triangle in the stream may cover any part of the screen the working set of data maintained by these renderers is large; typically at least a full-screen size color buffer, depth buffer, and possibly a stencil buffer too. A typical working set for a modern device will be 32 bits-per-pixel (bpp) color, and 32bpp packed depth/stencil. A 1080p display therefore has a working set of 16MB, and a 4k2k TV has a working set of 64MB. Due to their size these working buffers must be stored off-chip in a DRAM.
Every blending, depth testing, and stencil testing operation requires the current value of the data for the current fragment’s pixel coordinate to be fetched from this working set. All fragments shaded will typically touch this working set, so at high resolutions the bandwidth load placed on this memory can be exceptionally high, with multiple read-modify-write operations per fragment, although caching can mitigate this slightly. This need for high bandwidth access in turn drives the need for a wide memory interface with lots of pins, as well as specialized high-frequency memory, both of which result in external memory accesses which are particularly energy intensive.
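To put rough numbers on this, here is a small back-of-the-envelope sketch in C. The working-set sizes follow directly from the figures above; the 2.5x overdraw factor and 60 FPS frame rate are purely illustrative assumptions, not measured values:

```c
#include <stdio.h>

/* Full-screen working set for an immediate mode renderer:
 * 32bpp color + 32bpp packed depth/stencil = 8 bytes per pixel. */
static double working_set_mb(double width, double height)
{
    return width * height * 8.0 / (1024.0 * 1024.0);
}

int main(void)
{
    printf("1080p working set: %.1f MB\n", working_set_mb(1920, 1080)); /* ~16 MB */
    printf("4k2k  working set: %.1f MB\n", working_set_mb(3840, 2160)); /* ~64 MB */

    /* Blending is a read-modify-write: roughly 8 bytes of color traffic
     * per blended fragment. Overdraw of 2.5x and 60 FPS are illustrative
     * assumptions only. */
    double traffic_gb = 1920.0 * 1080.0 * 2.5 * 60.0 * 8.0 / 1e9;
    printf("color traffic at 1080p60: ~%.1f GB/s\n", traffic_gb);
    return 0;
}
```

Even this simplified estimate lands in the gigabytes-per-second range for color data alone, before depth and stencil traffic are counted.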
The Mali GPU family takes a very different approach, commonly called tile-based rendering, designed to minimize the amount of power hungry external memory accesses which are needed during rendering. As described in the first blog in this series, Mali uses a distinct two-pass rendering algorithm for each render target. It first executes all of the geometry processing, and then executes all of the fragment processing. During the geometry processing stage, Mali GPUs break up the screen into small 16x16 pixel tiles and construct a list of which rendering primitives are present in each tile. When the GPU fragment shading step runs, each shader core processes one 16x16 pixel tile at a time, rendering it to completion before starting the next one. For tile-based architectures the algorithm equates to:
foreach( tile )
    foreach( primitive in tile )
        foreach( fragment in primitive in tile )
            render fragment
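To make the binning step of the geometry processing stage concrete, here is a minimal illustrative sketch in C. This is emphatically not Mali's actual tiler data structure; it simply shows the idea of appending each primitive to the list of every 16x16 tile that its screen-space bounding box touches. All types and helpers here are invented for the example, and error handling is omitted for brevity:

```c
#include <stdlib.h>

#define TILE_SIZE 16

/* Screen-space bounding box of a primitive, assumed already clamped
 * to the screen. */
typedef struct { float min_x, min_y, max_x, max_y; } aabb_t;

/* Each tile keeps a growable list of the primitives which overlap it. */
typedef struct {
    size_t count, capacity;
    int   *ids;
} tile_list_t;

static void tile_list_append(tile_list_t *t, int id)
{
    if (t->count == t->capacity) {
        t->capacity = t->capacity ? t->capacity * 2 : 16;
        t->ids = realloc(t->ids, t->capacity * sizeof *t->ids);
    }
    t->ids[t->count++] = id;
}

/* Append 'primitive_id' to every tile its bounding box touches.
 * 'tiles' is a row-major grid of tiles_x columns. */
static void bin_primitive(tile_list_t *tiles, int tiles_x,
                          int primitive_id, aabb_t box)
{
    int ty_end = (int)box.max_y / TILE_SIZE;
    int tx_end = (int)box.max_x / TILE_SIZE;

    for (int ty = (int)box.min_y / TILE_SIZE; ty <= ty_end; ty++)
        for (int tx = (int)box.min_x / TILE_SIZE; tx <= tx_end; tx++)
            tile_list_append(&tiles[ty * tiles_x + tx], primitive_id);
}
```

The fragment shading stage can then walk one tile's list at a time, which is exactly the loop structure shown in the pseudocode above.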
As a 16x16 tile is only a small fraction of the total screen area it is possible to keep the entire working set (color, depth, and stencil) for a whole tile in a fast RAM which is tightly coupled with the GPU shader core.
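The arithmetic behind that claim is straightforward; using the same 8 bytes per pixel as before, a whole tile's working set is tiny:

```c
#include <stdio.h>

int main(void)
{
    /* One 16x16 tile at 32bpp color + 32bpp packed depth/stencil. */
    int tile_bytes = 16 * 16 * (4 + 4);
    printf("per-tile working set: %d bytes\n", tile_bytes); /* 2048 bytes */
    return 0;
}
```

Two kilobytes per tile (or a small multiple of that with multisampling) comfortably fits on-chip, in contrast to the tens of megabytes of working set an immediate mode renderer must keep in DRAM.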
This tile-based approach has a number of advantages. They are mostly transparent to the developer, but worth knowing about, in particular when trying to understand the bandwidth costs of your content:

- All accesses to the working set needed for blending, depth testing, and stencil testing are local, on-chip accesses, which are both fast and far more power efficient than round trips to external DRAM.
- Framebuffer contents only need to be written back to main memory once per tile, when the tile has been rendered to completion, rather than being repeatedly read and modified during rendering (see the sketch below for one way applications can cooperate with this).
- Anti-aliasing can be provided at very low cost, because the additional samples can be kept in the tile memory and resolved before the tile is written back, so they never touch external memory.
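As one concrete example of cooperating with that write-once-per-tile behavior, here is a minimal sketch using the standard OpenGL ES 3.0 glInvalidateFramebuffer entry point (OpenGL ES 2.0 offers the equivalent glDiscardFramebufferEXT extension). If the depth and stencil contents are only needed transiently during the frame, telling the driver so allows a tile-based GPU to simply drop them from tile memory instead of writing them back to DRAM:

```c
#include <GLES3/gl3.h>

/* Call at the end of the frame, after all draws to the default
 * framebuffer: depth and stencil were only needed transiently, so the
 * driver may drop them from tile memory rather than write them out. */
static void discard_transient_buffers(void)
{
    const GLenum attachments[] = { GL_DEPTH, GL_STENCIL };
    glInvalidateFramebuffer(GL_FRAMEBUFFER, 2, attachments);
}
```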
It is clear from the list above that tile-based rendering carries a number of advantages, in particular giving very significant reductions in the bandwidth and power associated with framebuffer data, as well as being able to provide low-cost anti-aliasing. What is the downside?
The principal additional overhead of any tile-based rendering scheme is the point of hand-over from the vertex shader to the fragment shader. The output of the geometry processing stage, the per-vertex varyings and tiler intermediate state, must be written out to main memory and then re-read by the fragment processing stage. There is therefore a balance to be struck between spending extra bandwidth on the varying data and tiler state, and saving bandwidth on the framebuffer data.
In consumer electronics today there is a significant shift towards higher resolution displays; 1080p is now normal for smartphones, tablets such as the Mali-T604 powered Google Nexus 10 are running at WQXGA (2560x1600), and 4k2k is becoming the new "must have" in the television market. Screen resolution, and hence framebuffer bandwidth, is growing fast. In this area Mali really shines, and does so in a manner which is mostly transparent to the application developer - you get all of these goodies for free, with no application changes!
On the geometry side of things, Mali copes well with complexity. Many high-end benchmarks are approaching a million triangles a frame, which is an order of magnitude (or two) more complex than popular gaming applications on the Android app stores. However, as the intermediate geometry data does hit main memory, there are some useful tips and tricks which can be applied to fine-tune GPU performance and get the best out of the system. These are worth an entire blog by themselves, so we will cover them at a later point in this series.
In this blog I have compared and contrasted the desktop-style immediate mode renderer and the tile-based approach used by Mali, looking in particular at the memory bandwidth implications of each.
Tune in next time and I’ll finish off the definition of the abstract machine, looking at a simple block model of the Mali shader core itself. Once we have that out of the way we can get on with the useful part of the series: putting this model to work and earning a living optimizing your applications running on Mali.
Note: The next blog in this series has now been published. You can read it by following the link below.

Read next blog: The Midgard Shader Core - https://community.arm.com/graphics/b/blog/posts/the-mali-gpu-an-abstract-machine-part-3---the-midgard-shader-core
As always comments and questions more than welcome,
Pete
Thanks peterharris!
Thanks for the comments - nice to know someone actually reads these.
I actually set aside my morning to read and re-read your 3-part series of articles when I found them late last night. I had to look up a lot of information to get a better handle on the high-level concepts, and then formulate questions if I could not find the information. I've also been trying to think of a way to reduce the geometry transfer bandwidth for tile-based renderers. I've considered the obvious case of applying the AFBC logic to the vertex data to reduce the data size. I've also thought about having a die-configurable chunk of memory sitting on chip that geometry could be written to, and when it was full, the remaining data could spill over into DDR. In this way, it could cut down on the amount of memory being written out to RAM, thus reducing power and improving performance. Perhaps the L2 caches could run double-duty in this regard? Forgive me if this sounds silly; I do enjoy thinking about these types of problems. I get a bit too excited about this stuff.
I am very pleased to hear about the MSAA 16x implementation. Ignoring the edge cases (tiny triangles, copious overdraw, etc.), even for extremely simple render-to-screen scenes you're likely not to feel it, as Mali T6xx/T7xx has no trouble with 4-cycle shaders, and one can always choose a lower multiplier. It seems a truly elegant use of Mali's TBDR.
Perhaps a post should be done on AA alone (if there hasn't been one already). While finding info about MSAA is easy, finding good examples is not; even finding images of 16x MSAA examples is very hard. It floors me how rarely I find this Mali feature used in Android games (and I specifically look out for it), given the negligible performance cost and dead-simple implementation. The only game I can recall finding that natively implements MSAA is Royal Revolt, which runs at the full 2.5K on my T604-powered Nexus 10. Though the shaders are very simple, the game looks amazing, thanks in large part to the AA.
I may have one or two questions in your follow up blog post on the architecture, but I will try to make them good ones!
Thanks again Pete,
Sean