We have recently announced the first GPU in the Mali Bifrost architecture family, the Mali-G71. While the overall rendering model it implements is similar to previous Mali GPUs – the Bifrost family is still a deeply pipelined tile-based renderer (see the first two blogs in this series The Mali GPU: An Abstract Machine, Part 1 - Frame Pipelining and The Mali GPU: An Abstract Machine, Part 2 - Tile-based Rendering for more information) – there are sufficient changes in the programmable shader core to require a follow-up to the original "Abstract Machine" blog series.
In this blog, I introduce the block-level architecture of a stereotypical Bifrost shader core, and explain what performance expectations application developers should have of the hardware when it comes to content optimization and understanding the hardware performance counters exposed via tools such as DS-5 Streamline. This blog assumes you have read the first two parts in the series, so I would recommend starting with those if you have not read them already.
The top-level architecture of a Bifrost GPU is the same as that of the earlier Midgard GPUs.
Like Midgard, Bifrost is a unified shader core architecture, meaning that the design contains only a single class of shader core, which is capable of executing all types of shader programs as well as compute kernels.
The exact number of shader cores present in a particular silicon chip varies; our partners can choose how many shader cores they implement based on their performance needs and silicon area constraints. The Mali-G71 GPU can scale from a single core for low-end devices all the way up to 32 cores for the highest performance designs.
The graphics work for the GPU is queued in a pair of queues, one for vertex/tiling/compute workloads and one for fragment workloads, with all work for one render target being submitted as a single submission into each queue.
The workload in each queue is broken into smaller pieces and dynamically distributed across all of the shader cores in the GPU, or in the case of tiling workloads to a fixed function tiling unit. Workloads from both queues can be processed by a shader core at the same time; for example, vertex processing and fragment processing for different render targets can be running in parallel (see the first blog for more details on this pipelining methodology).
The processing units in the system share a level 2 cache to improve performance and to reduce the memory bandwidth consumed by repeated data fetches. The size of the L2 cache is configurable by our silicon partners depending on their requirements, but is typically 64KB per shader core in the GPU.
The number of bus ports out of the GPU to main memory, and hence the available memory bandwidth, depends on the number of shader cores implemented. In general we aim to be able to write one 32-bit pixel per core per clock, so it would be reasonable to expect an 8-core design to have a total of 256 bits of memory bandwidth (for both read and write) per clock cycle. The maximum number of AXI ports has been increased over Midgard, allowing larger configurations with more than 12 cores to access a higher peak bandwidth per clock if the downstream memory system can support it.
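As a quick worked version of that arithmetic, the snippet below (plain C, with a hypothetical 600 MHz GPU clock chosen purely for illustration) converts the per-clock figure into an approximate theoretical bandwidth; as the next paragraph notes, the real achievable figure is bounded by the downstream memory system.

```c
#include <stdio.h>

/* Rough, illustrative arithmetic only: theoretical GPU-side bandwidth for a
 * design writing one 32-bit pixel per core per clock. The 8-core count
 * matches the example in the text; the 600 MHz clock is a hypothetical
 * example value, not a Mali-G71 specification. */
int main(void)
{
    const unsigned cores      = 8;
    const unsigned bits_clock = cores * 32;            /* 256 bits per clock    */
    const double   clock_hz   = 600e6;                 /* hypothetical GPU clock */
    const double   gbytes_s   = bits_clock / 8.0 * clock_hz / 1e9;

    printf("%u bits/clock -> %.1f GB/s theoretical\n", bits_clock, gbytes_s);
    return 0;
}
```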
Note that the available memory bandwidth depends on both the GPU (frequency, AXI port width) and the downstream memory system (frequency, AXI data width, AXI latency). In many designs the AXI clock will be lower than the GPU clock, so not all of the theoretical bandwidth of the GPU is actually available to applications.
All Mali shader cores are structured as a number of fixed-function hardware blocks wrapped around a programmable core. The programmable core is the largest area of change in the Bifrost GPU family, with a number of significant changes over the Midgard "Tripipe" design discussed in the previous blog in this series:
The Bifrost programmable Execution Core consists of one or more Execution Engines – three in the case of the Mali-G71 – and a number of shared data processing units, all linked by a messaging fabric.
The Execution Engines are responsible for actually executing the programmable shader instructions; each includes a single composite arithmetic processing pipeline, as well as all of the required thread state for the threads that the execution engine is currently processing.
The arithmetic units in Bifrost implement a quad-vectorization scheme to improve functional unit utilization. Threads are grouped into bundles of four, called a quad, and each quad fills the width of a 128-bit data processing unit. From the point of view of a single thread this architecture looks like a stream of scalar 32-bit operations, which makes achieving high utilization of the hardware a relatively straightforward task for the shader compiler. The example below shows how a vec3 arithmetic operation may map onto a pure SIMD unit (pipeline executes one thread per clock):
... vs a quad-based unit (pipeline executes one lane per thread for four threads per clock):
The advantage in terms of keeping the hardware units full of useful work, irrespective of the vector length used in the program, is clearly highlighted by these diagrams. The power efficiency and performance provided by narrower-than-32-bit types are still critically important for mobile devices, so Bifrost maintains native support for int8, int16, and fp16 data types, which can be packed to fill the 128-bit data width of the data unit. A single 128-bit maths unit can therefore perform 8x fp16/int16 operations per clock cycle, or 16x int8 operations per clock cycle.
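To make the quad scheme more concrete, here is a small, purely illustrative C model – not hardware, compiler, or driver code – of one arithmetic clock: the same scalar operation is executed for all four lanes of a quad, so a shader-level vec3 add issues as three fully occupied clocks. The register numbering and input values are arbitrary.

```c
#include <stdio.h>

#define QUAD_SIZE 4
#define NUM_REGS  64   /* Mali-G71 exposes 64x 32-bit registers per thread */

typedef struct { float r[NUM_REGS]; } thread_state;

/* One clock of the model: the same scalar add for all four lanes of the quad. */
static void quad_fadd(thread_state q[QUAD_SIZE], int dst, int a, int b)
{
    for (int lane = 0; lane < QUAD_SIZE; ++lane)
        q[lane].r[dst] = q[lane].r[a] + q[lane].r[b];
}

int main(void)
{
    thread_state quad[QUAD_SIZE] = {0};

    /* Arbitrary per-thread inputs: vec3 a in r0..r2, vec3 b in r4..r6. */
    for (int lane = 0; lane < QUAD_SIZE; ++lane)
        for (int c = 0; c < 3; ++c) {
            quad[lane].r[0 + c] = 1.0f * lane;
            quad[lane].r[4 + c] = 2.0f;
        }

    /* "vec3 c = a + b" in the shader: three scalar clocks, each keeping all
     * four lanes busy regardless of the vector width used in the program. */
    quad_fadd(quad, 8,  0, 4);  /* c.x */
    quad_fadd(quad, 9,  1, 5);  /* c.y */
    quad_fadd(quad, 10, 2, 6);  /* c.z */

    printf("lane 2, c.x = %f\n", quad[2].r[8]);  /* prints 4.0 */
    return 0;
}
```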
To improve performance and performance scalability for complex programs, Bifrost implements a substantially larger general-purpose register file for shader programs to use. The Mali-G71 provides 64x 32-bit registers while still allowing the maximum thread occupancy of the GPU, removing the earlier trade-off between thread count and register file usage described in this blog: ARM Mali Compute Architecture Fundamentals.
The size of the fast constant storage, used for storing OpenGL ES uniforms and Vulkan push constants, has also been increased, which reduces cache-access pressure for programs using lots of constant storage.
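Since push constants are mentioned above, here is a minimal sketch of how they are declared and updated through the standard Vulkan C API; the struct contents, stage flags, and handle names (cmd, layout, layoutInfo) are hypothetical, and nothing in it is specific to Mali.

```c
#include <vulkan/vulkan.h>

/* Hypothetical per-draw constants, kept small (80 bytes, within the
 * 128-byte minimum push constant limit guaranteed by the spec). */
typedef struct {
    float mvp[16];
    float tint[4];
} DrawConstants;

/* Declare the range when creating the pipeline layout; attach it via
 * layoutInfo.pushConstantRangeCount / layoutInfo.pPushConstantRanges
 * in the VkPipelineLayoutCreateInfo used elsewhere in the application. */
VkPushConstantRange push_range = {
    .stageFlags = VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT,
    .offset     = 0,
    .size       = sizeof(DrawConstants),
};

/* Update the constants per draw call with vkCmdPushConstants. */
void set_constants(VkCommandBuffer cmd, VkPipelineLayout layout,
                   const DrawConstants *c)
{
    vkCmdPushConstants(cmd, layout,
                       VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT,
                       0, sizeof(*c), c);
}
```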
The load/store unit handles all general purpose (non-texture) memory accesses, including vertex attribute fetch, varying fetch, buffer accesses, and thread stack accesses. It includes a 16KB L1 data cache per core, which is backed by the shared L2 cache.
The load/store cache can access a single 64-byte cache line per clock cycle, and accesses across a thread quad are optimized to reduce the number of unique cache access requests required. For example, if all four threads in the quad access data inside the same cache line, that data can be returned in a single cycle.
Note that this load/store merging functionality can significantly accelerate many data access patterns found in common OpenCL compute kernels, which are often memory access limited, so maximizing its utility in algorithm design is a key optimization objective. It is also worth noting that even though the Bifrost arithmetic units are scalar, data access patterns still benefit from well-written vector loads, so we still recommend writing vectorized shader and kernel code whenever possible.
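To illustrate the merging behaviour described above, here is a small, purely illustrative C model (not hardware or driver code) that counts the unique 64-byte cache lines touched when each of the four threads in a quad performs one load; contiguous, vector-friendly accesses collapse to a single request, while widely strided accesses do not.

```c
#include <stdint.h>
#include <stdio.h>

#define QUAD_SIZE  4
#define LINE_BYTES 64u   /* load/store cache line size quoted above */

/* Count unique cache lines for one load per thread; accesses falling in the
 * same line are assumed to merge into a single cache request. This is a
 * deliberate simplification of the real hardware behaviour. */
static int quad_cache_requests(const uintptr_t addr[QUAD_SIZE])
{
    uintptr_t lines[QUAD_SIZE];
    int count = 0;
    for (int t = 0; t < QUAD_SIZE; ++t) {
        uintptr_t line = addr[t] / LINE_BYTES;
        int seen = 0;
        for (int i = 0; i < count; ++i)
            if (lines[i] == line) seen = 1;
        if (!seen)
            lines[count++] = line;
    }
    return count;
}

int main(void)
{
    /* Four threads reading consecutive 16-byte vec4s: one 64-byte line,
     * so a single cache access serves the whole quad. */
    uintptr_t contiguous[QUAD_SIZE] = { 0, 16, 32, 48 };
    /* Four threads striding by 256 bytes: four separate lines, four requests. */
    uintptr_t strided[QUAD_SIZE] = { 0, 256, 512, 768 };

    printf("contiguous: %d request(s)\n", quad_cache_requests(contiguous));
    printf("strided:    %d request(s)\n", quad_cache_requests(strided));
    return 0;
}
```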
The varying unit is a dedicated fixed-function varying interpolator. It implements a similar optimization strategy to the programmable arithmetic units: it vectorizes interpolation across the thread quad to ensure good functional unit utilization, and includes support for faster fp16 interpolation.
The unit can interpolate 128 bits per quad per clock; for example, interpolating a mediump (fp16) vec4 would take two cycles per four-thread quad. Minimizing varying vector widths and aggressively using fp16 rather than fp32 can therefore improve application performance.
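As a back-of-envelope illustration of these cycle counts, the following C helper (a simple model based only on the 128 bits per quad per clock figure above, not a hardware simulator) rounds the per-quad interpolation bit count up to whole clocks:

```c
#include <stdio.h>

/* Cycles to interpolate one varying for a quad of four threads, assuming the
 * unit moves 128 bits per quad per clock. Illustrative model only. */
static unsigned varying_cycles_per_quad(unsigned components,
                                        unsigned bits_per_component)
{
    unsigned bits_per_quad = components * bits_per_component * 4; /* 4 threads */
    return (bits_per_quad + 127) / 128;                           /* round up  */
}

int main(void)
{
    printf("mediump vec4: %u cycles\n", varying_cycles_per_quad(4, 16)); /* 2 */
    printf("highp   vec4: %u cycles\n", varying_cycles_per_quad(4, 32)); /* 4 */
    printf("mediump vec2: %u cycles\n", varying_cycles_per_quad(2, 16)); /* 1 */
    return 0;
}
```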
The ZS and Blend unit is responsible for handling all accesses to the tile memory, both for built-in OpenGL ES operations such as depth/stencil testing and color blending, and for the programmatic access to the tile buffer needed by functionality such as the shader framebuffer fetch and pixel local storage extensions.
Unlike the earlier Midgard designs, where the LS Pipe was a monolithic pipeline handling load/store cache access, varying interpolation, and tile-buffer accesses, Bifrost implements three smaller and more efficient parallel data units. This means that tile-buffer access can run in parallel with varying interpolation, for example. Graphics algorithms making use of programmatic tile-buffer access, which tended to be very LS Pipe-heavy on Midgard, should see a measurable reduction in contention for processing resources.
The texture unit implements all texture memory accesses. It includes a 16KB L1 data cache per core, which is backed by the shared L2 cache. The architectural performance of this block in the Mali-G71 is the same as in the earlier Midgard GPUs: it can return one bilinear filtered (GL_LINEAR_MIPMAP_NEAREST) texel per clock. For example, a bilinear texture lookup for each thread in a four-thread quad would take four cycles.
Some texture access modes, such as trilinear (GL_LINEAR_MIPMAP_LINEAR) filtering and wide texel formats (more than 32 bits per texel), require multiple cycles to return data.
One exception to the wide format rule – and a new optimization in Bifrost – is depth texture sampling. Sampling from DEPTH_COMPONENT16 or DEPTH_COMPONENT24 textures, which is commonly needed for both shadow mapping techniques and deferred lighting algorithms, has been optimized and is now a single-cycle lookup, doubling the performance relative to GPUs in the Midgard family.
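To show where this fast path applies in practice, here is a minimal sketch that allocates a DEPTH_COMPONENT16 shadow-map texture through the standard OpenGL ES 3.0 C API; the resolution, filter, and comparison settings are placeholder choices rather than recommendations from the text above.

```c
#include <GLES3/gl3.h>

/* Allocate a 16-bit depth texture suitable for use as a shadow map. */
GLuint create_shadow_map(GLsizei width, GLsizei height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Immutable storage using the DEPTH_COMPONENT16 format. */
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT16, width, height);

    /* Enable hardware depth comparison for sampling via a shadow sampler. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    return tex;
}
```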
In addition to the shader core changes, Bifrost introduces a new Index-Driven Vertex Shading (IDVS) geometry processing pipeline. Earlier Mali GPUs processed all of the vertex shading before tiling, often wasting computation and bandwidth on varyings that only related to culled triangles (e.g. triangles outside of the frustum, or failing a facing test).
The IDVS pipeline splits the vertex shader into two halves: one processing the position, and one processing the remaining varyings.
This flow provides two significant optimizations: position shading is only performed for vertices actually referenced by the index buffer, and the remaining varyings are only shaded for vertices belonging to primitives that survive culling, saving both computation and memory bandwidth.
To get the most benefit from the Bifrost geometry flow it is useful to partially deinterleave packed vertex buffers: place the attributes contributing to position in one packed buffer, and the attributes contributing to non-position varyings in a second packed buffer. This means that the non-position varyings are not pulled into the cache for vertices which are culled and never contribute to an on-screen primitive. My colleague stacysmith has written a good blog on optimizing buffer packing to exploit this type of geometry processing pipeline.
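For illustration, a sketch of that split using the standard OpenGL ES 3.0 C API is shown below; the attribute layout, locations, and names are hypothetical, and the two-buffer structure is the only point being made.

```c
#include <stddef.h>
#include <GLES3/gl3.h>

/* Hypothetical attribute layout: position data in one tightly packed buffer,
 * everything feeding non-position varyings in a second one, so culled
 * vertices never pull their non-position data into the cache. */
typedef struct { float position[3]; }               PosAttribs;
typedef struct { float normal[3]; float uv[2]; }    VaryingAttribs;

void setup_vertex_buffers(GLuint pos_vbo, GLuint var_vbo)
{
    /* Buffer 0: only the data needed to compute gl_Position. */
    glBindBuffer(GL_ARRAY_BUFFER, pos_vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(PosAttribs),
                          (const void *)0);
    glEnableVertexAttribArray(0);

    /* Buffer 1: data only needed for the non-position varyings. */
    glBindBuffer(GL_ARRAY_BUFFER, var_vbo);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(VaryingAttribs),
                          (const void *)offsetof(VaryingAttribs, normal));
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(VaryingAttribs),
                          (const void *)offsetof(VaryingAttribs, uv));
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);
}
```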
Like the earlier Midgard GPUs, Bifrost hardware supports a large number of performance counters to enable application developers to profile and optimize their applications. See more detail on the performance counters available to application developers for the Bifrost architecture.
Comments and questions welcomed as always,
Cheers,
Pete
Ok, here are a few questions!
1) Do Vulkan compute threads execute on Bifrost with the same efficiency as vertex or fragment threads? Are there any disparities?
2) Can multiple components of a vector operation be scheduled across more than one lane of an execution engine in a single clock?
3) Are individual L2 cache blocks associated with a cluster of Bifrost cores? If so, what is a typical configuration? Or is a single L2 cache accessible by all cores and its size varies with respect to the number of implemented cores?
4) Have the cycle costs for complex operations (eg. div, sqrt, trig, etc) changed?
5) Does Bifrost still use a per-core round-robin to execute an associated list of threads?