Android is enabling a host of useful new Vulkan extensions for mobile. These new extensions are set to improve the state of graphics APIs for modern applications, enabling new use cases and changing how developers can design graphics renderers going forward. In particular, Android R adds a whole set of new Vulkan extensions. These extensions will be available across various Android smartphones, including the Samsung Galaxy S21, which launched on 14 January. Existing Samsung Galaxy S models, such as the Samsung Galaxy S20, can also be upgraded to Android R.
One group of these new Vulkan extensions for mobile is the 'maintenance extensions'. These plug various holes in the Vulkan specification. Most of the issues they address can be worked around, but doing so is annoying for application developers. Having these extensions means less friction overall, which is a very good thing.
The first of these, VK_KHR_uniform_buffer_standard_layout, is a quiet extension, but I still feel it has a lot of impact since it removes a fundamental restriction for applications. Getting to data efficiently is the lifeblood of GPU programming.
One thing I have seen trip up developers again and again is the antiquated set of rules for how uniform buffers (UBOs) are laid out in memory. For whatever reason, UBOs have been stuck with annoying alignment rules which go back to ancient times, yet SSBOs have nice alignment rules. Why?
As an example, let us assume we want to send an array of floats to a shader:
#version 450
layout(set = 0, binding = 0, std140) uniform UBO
{
    float values[1024];
};
layout(location = 0) out vec4 FragColor;
layout(location = 0) flat in int vIndex;

void main()
{
    FragColor = vec4(values[vIndex]);
}
If you are not used to graphics API idiosyncrasies, this looks fine, but danger lurks around the corner. Any array in a UBO will be padded out to have 16 byte elements, meaning the only way to have a tightly packed UBO is to use vec4 arrays. Somehow, legacy hardware was hardwired for this assumption. SSBOs never had this problem.
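To make the cost concrete, here is a rough sketch (the struct name is purely illustrative) of what the CPU-side data has to look like to match that std140 array:

// Hypothetical CPU-side mirror of "float values[1024]" under std140.
// Each array element occupies a 16-byte stride, so three out of every
// four floats we upload are pure padding.
struct UBOData
{
    float values[1024][4]; // the shader only ever reads values[i][0]
};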
You might have run into these weird layout qualifiers in GLSL. They reference some rather old GLSL versions. std140 refers to GLSL 1.40, which was introduced alongside OpenGL 3.1, the version in which uniform buffers were added to OpenGL.
The std140 packing rules define how variables are packed into buffers. The main quirks of std140 are:
- Array elements are padded out to a 16-byte stride, even for scalar types like float.
- vec3 is aligned to 16 bytes, the same as vec4.
- Struct alignment (and therefore the stride of struct arrays) is rounded up to 16 bytes.
The array quirk mirrors HLSL’s cbuffer. After all, both OpenGL and D3D mapped to the same hardware. Essentially, the assumption I am making here is that hardware was only able to load 16 bytes at a time with 16 byte alignment. To extract scalars, you could always do that after the load.
std430 was introduced in GLSL 4.30 in OpenGL 4.3 and was designed to be used with SSBOs. std430 removed the array element alignment rule, which means that with std430, we can express this efficiently:
#version 450
layout(set = 0, binding = 0, std430) readonly buffer SSBO
{
    float values[1024];
};
layout(location = 0) out vec4 FragColor;
layout(location = 0) flat in int vIndex;

void main()
{
    FragColor = vec4(values[vIndex]);
}
Basically, the new extension enables std430 layout for use with UBOs as well.
#version 450
#extension GL_EXT_scalar_block_layout : require
layout(set = 0, binding = 0, std430) uniform UBO
{
    float values[1024];
};
layout(location = 0) out vec4 FragColor;
layout(location = 0) flat in int vIndex;

void main()
{
    FragColor = vec4(values[vIndex]);
}
Could we not just use SSBOs instead? On some architectures, yes, that is a valid workaround. However, some architectures also have special caches which are designed specifically for UBOs. Improving the memory layout of UBOs is still valuable.
The Vulkan GLSL extension which adds support for std430 UBOs goes a little further and supports the scalar layout as well. This is a completely relaxed layout scheme where alignment requirements are essentially gone; however, that requires a different Vulkan extension, VK_EXT_scalar_block_layout, to work.
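On the API side, a minimal sketch of enabling this, assuming the device reports support for VK_KHR_uniform_buffer_standard_layout and VK_EXT_scalar_block_layout and that both are added to ppEnabledExtensionNames (the variable names are illustrative):

// Optional: fully relaxed scalar layout.
VkPhysicalDeviceScalarBlockLayoutFeaturesEXT scalar_features = {
    .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SCALAR_BLOCK_LAYOUT_FEATURES_EXT,
    .scalarBlockLayout = VK_TRUE,
};

// std430 layout for UBOs.
VkPhysicalDeviceUniformBufferStandardLayoutFeaturesKHR ubo_layout_features = {
    .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_UNIFORM_BUFFER_STANDARD_LAYOUT_FEATURES_KHR,
    .pNext = &scalar_features,
    .uniformBufferStandardLayout = VK_TRUE,
};

VkDeviceCreateInfo device_info = {
    .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
    .pNext = &ubo_layout_features,
    // Queue create infos and enabled extension names omitted for brevity.
};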
The next maintenance extension, VK_KHR_separate_depth_stencil_layouts, deals with depth-stencil images, which are weird in general. It is natural to think of the two aspects as separate images. However, the reality is that some GPU architectures like to pack depth and stencil together into one image, especially with D24S8 formats.
Expressing image layouts for depth-stencil formats has therefore been somewhat awkward in Vulkan, especially if you want to make one aspect read-only while keeping the other aspect read/write, for example.
In Vulkan 1.0, both depth and stencil needed to be in the same image layout. This means that you are either doing read-only depth-stencil or read/write depth-stencil. This was quickly identified as not being good enough for certain use cases. There are valid use cases where depth is read-only while stencil is read/write in deferred rendering for example.
Eventually, VK_KHR_maintenance2 added support for some mixed image layouts which lets us express read-only depth, read/write stencil, and vice versa:
VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_STENCIL_READ_ONLY_OPTIMAL_KHR
VK_IMAGE_LAYOUT_DEPTH_READ_ONLY_STENCIL_ATTACHMENT_OPTIMAL_KHR
Usually, this is good enough, but there is a significant caveat to this approach: depth and stencil layouts must be specified and transitioned together. This means that it is not possible to render to the depth aspect while concurrently transitioning the stencil aspect, since changing an image layout is a write operation. If the engine is not designed to couple depth and stencil together, this causes a lot of friction in the implementation.
What this extension does is completely decouple the image layouts of the depth and stencil aspects, making it possible to modify the depth or stencil image layout in complete isolation. For example:
VkImageMemoryBarrier barrier = {…};
Normally, we would have to specify both the DEPTH and STENCIL aspects for depth-stencil images. Now, we can completely ignore what stencil is doing and only modify the depth image layout.
barrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT;
barrier.oldLayout = VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_OPTIMAL_KHR;
barrier.newLayout = VK_IMAGE_LAYOUT_DEPTH_READ_ONLY_OPTIMAL;
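To put that fragment into context, here is a slightly fuller sketch of transitioning only the depth aspect to a read-only layout, leaving the stencil aspect alone. The image handle, command buffer, and chosen pipeline stages are assumptions for illustration:

VkImageMemoryBarrier barrier = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
    .srcAccessMask = VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
    .oldLayout = VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_OPTIMAL_KHR,
    .newLayout = VK_IMAGE_LAYOUT_DEPTH_READ_ONLY_OPTIMAL_KHR,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .image = depth_stencil_image, // placeholder handle
    // Only the DEPTH aspect is named; the stencil aspect keeps its layout.
    .subresourceRange = {
        .aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT,
        .levelCount = 1,
        .layerCount = 1,
    },
};

vkCmdPipelineBarrier(cmd,
                     VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT,
                     VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                     0, 0, NULL, 0, NULL, 1, &barrier);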
Similarly, in VK_KHR_create_renderpass2, there are extension structures where you can specify stencil layouts separately from the depth layout if you wish.
typedef struct VkAttachmentDescriptionStencilLayout {
    VkStructureType    sType;
    void*              pNext;
    VkImageLayout      stencilInitialLayout;
    VkImageLayout      stencilFinalLayout;
} VkAttachmentDescriptionStencilLayout;

typedef struct VkAttachmentReferenceStencilLayout {
    VkStructureType    sType;
    void*              pNext;
    VkImageLayout      stencilLayout;
} VkAttachmentReferenceStencilLayout;
As with image memory barriers, it is possible to express layout transitions that affect only the depth or only the stencil aspect of an attachment.
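As a minimal sketch, assuming we want read-only depth with a writable stencil aspect on attachment 0 (the attachment index and chosen layouts are illustrative), the attachment reference could be set up like this:

// The stencil-specific layout is chained into the depth-stencil reference.
VkAttachmentReferenceStencilLayoutKHR stencil_ref = {
    .sType = VK_STRUCTURE_TYPE_ATTACHMENT_REFERENCE_STENCIL_LAYOUT_KHR,
    .stencilLayout = VK_IMAGE_LAYOUT_STENCIL_ATTACHMENT_OPTIMAL_KHR,
};

VkAttachmentReference2KHR ds_ref = {
    .sType = VK_STRUCTURE_TYPE_ATTACHMENT_REFERENCE_2_KHR,
    .pNext = &stencil_ref,
    .attachment = 0,
    // The layout member now only describes the depth aspect.
    .layout = VK_IMAGE_LAYOUT_DEPTH_READ_ONLY_OPTIMAL_KHR,
};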
Each core Vulkan version has targeted a specific SPIR-V version. For Vulkan 1.0, we have SPIR-V 1.0. For Vulkan 1.1, we have SPIR-V 1.3, and for Vulkan 1.2 we have SPIR-V 1.5.
SPIR-V 1.4 was an interim version between Vulkan 1.1 and 1.2 which added some nice features, and the VK_KHR_spirv_1_4 extension makes it usable on Vulkan 1.1 devices. It is largely of interest to developers who like to target SPIR-V themselves; developers using GLSL or HLSL might not find much use for it. Some highlights of SPIR-V 1.4 that I think are worth mentioning are listed here.
Before SPIR-V 1.4, OpSelect only supported selecting between scalars and vectors. SPIR-V 1.4 extends OpSelect to composite types such as structs and arrays, which allows you to express this kind of code with a simple OpSelect:
MyStruct s = cond ? MyStruct(1, 2, 3) : MyStruct(4, 5, 6);
There are scenarios in high-level languages where you load a struct from a buffer and then place it in a function variable. If you have ever looked at the SPIR-V code for this kind of scenario, glslang copies each element of the struct one by one, which generates bloated SPIR-V code. This is because the struct type that lives in a buffer and the struct type for a function variable are not necessarily the same; offset decorations are the main culprit here. Copying objects in SPIR-V only works when the types are exactly the same, not "almost the same". OpCopyLogical fixes this problem by allowing you to copy objects whose types are identical except for their decorations.
SPIR-V 1.4 adds ways to express partial unrolling, how many iterations are expected, and other such advanced hints, which can help a driver optimize better using knowledge it otherwise would not have. There is no way to express these in normal shading languages yet, but it does not seem difficult to add support for them.
Describing look-up tables was a bit awkward in SPIR-V. The natural way to do this in SPIR-V 1.3 is to declare an array in the Private storage class with an initializer, access chain into it and load from it. However, there was never a way to express that a global variable is const, so we have to rely on compilers being a little smart. As a case study, let us see what glslang emits when targeting the Vulkan 1.1 environment:
#version 450
layout(location = 0) out float FragColor;
layout(location = 0) flat in int vIndex;

const float LUT[4] = float[](1.0, 2.0, 3.0, 4.0);

void main()
{
    FragColor = LUT[vIndex];
}

The generated SPIR-V looks like this:

%float_1 = OpConstant %float 1
%float_2 = OpConstant %float 2
%float_3 = OpConstant %float 3
%float_4 = OpConstant %float 4
%16 = OpConstantComposite %_arr_float_uint_4 %float_1 %float_2 %float_3 %float_4

%indexable = OpVariable %_ptr_Function__arr_float_uint_4 Function
OpStore %indexable %16
%24 = OpAccessChain %_ptr_Function_float %indexable %index
%25 = OpLoad %float %24

This is super weird code, but it is easy for compilers to promote to a LUT. If the compiler can prove there are no readers before the OpStore, and only one OpStore can statically happen, the compiler can promote the variable to a constant LUT.
In SPIR-V 1.4, the NonWritable decoration can also be used with Private and Function storage class variables. Add an initializer, and we get something that looks far more reasonable and obvious:
OpDecorate %indexable NonWritable
%16 = OpConstantComposite %_arr_float_uint_4 %float_1 %float_2 %float_3 %float_4

// Initialize an array with a constant expression and mark it as NonWritable.
// This is trivially a LUT.
%indexable = OpVariable %_ptr_Function__arr_float_uint_4 Function %16
%24 = OpAccessChain %_ptr_Function_float %indexable %index
%25 = OpLoad %float %24
The next extension, VK_KHR_shader_subgroup_extended_types, fixes a hole in Vulkan subgroup support. When subgroups were introduced, it was only possible to use subgroup operations on 32-bit values. However, with 16-bit arithmetic getting more popular, especially float16, there are use cases where you would want to use subgroup operations on smaller arithmetic types, making this kind of shader possible:
#version 450

// subgroupAdd
#extension GL_KHR_shader_subgroup_arithmetic : require
// For FP16 arithmetic
#extension GL_EXT_shader_explicit_arithmetic_types_float16 : require
// For subgroup operations on FP16
#extension GL_EXT_shader_subgroup_extended_types_float16 : require

layout(location = 0) out f16vec4 FragColor;
layout(location = 0) in f16vec4 vColor;

void main()
{
    FragColor = subgroupAdd(vColor);
}
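For the shader above to be usable, the corresponding Vulkan features have to be enabled at device creation. A minimal sketch, assuming the device supports VK_KHR_shader_subgroup_extended_types and VK_KHR_shader_float16_int8:

// FP16 arithmetic in shaders.
VkPhysicalDeviceShaderFloat16Int8FeaturesKHR float16_features = {
    .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_FLOAT16_INT8_FEATURES_KHR,
    .shaderFloat16 = VK_TRUE,
};

// Subgroup operations on non-32-bit types.
VkPhysicalDeviceShaderSubgroupExtendedTypesFeaturesKHR extended_types_features = {
    .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_SUBGROUP_EXTENDED_TYPES_FEATURES_KHR,
    .pNext = &float16_features,
    .shaderSubgroupExtendedTypes = VK_TRUE,
};
// Chain into VkDeviceCreateInfo::pNext alongside the extension names.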
In most engines, using VkFramebuffer objects can feel a bit awkward, since most engine abstractions are based around some idea of:
MyRenderAPI::BindRenderTargets(colorAttachments, depthStencilAttachment)
In this model, VkFramebuffer objects introduce a lot of friction, since engines would almost certainly end up with one of two strategies: creating VkFramebuffer objects on demand every time a render pass begins (and deferring their destruction), or caching them in some kind of hash map keyed on the attachments.
Unfortunately, there are some … reasons why VkFramebuffer exists in the first place, but VK_KHR_imageless_framebuffer at least removes the largest pain point: needing to know the exact VkImageViews that we are going to use before we actually start rendering.
With imageless framebuffers, we can defer specifying the exact VkImageViews we are going to render into until vkCmdBeginRenderPass. However, the framebuffer itself still needs to know certain metadata ahead of time, because unfortunately some drivers need this information.
First, we set the VK_FRAMEBUFFER_CREATE_IMAGELESS_BIT flag in vkCreateFramebuffer. This removes the need to set pAttachments. Instead, we specify some parameters for each attachment. We pass down this structure as a pNext:
typedef struct VkFramebufferAttachmentsCreateInfo {
    VkStructureType                            sType;
    const void*                                pNext;
    uint32_t                                   attachmentImageInfoCount;
    const VkFramebufferAttachmentImageInfo*    pAttachmentImageInfos;
} VkFramebufferAttachmentsCreateInfo;

typedef struct VkFramebufferAttachmentImageInfo {
    VkStructureType       sType;
    const void*           pNext;
    VkImageCreateFlags    flags;
    VkImageUsageFlags     usage;
    uint32_t              width;
    uint32_t              height;
    uint32_t              layerCount;
    uint32_t              viewFormatCount;
    const VkFormat*       pViewFormats;
} VkFramebufferAttachmentImageInfo;
Essentially, we need to specify almost everything that vkCreateImage would specify. The only thing we avoid is having to know the exact image views we need to use.
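As a rough sketch, creating an imageless framebuffer for a single color attachment could look like this. The render pass handle, resolution, and format are assumptions for illustration:

VkFormat color_format = VK_FORMAT_R8G8B8A8_UNORM;

// Metadata about the attachment we will eventually render into.
VkFramebufferAttachmentImageInfoKHR attachment_info = {
    .sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_ATTACHMENT_IMAGE_INFO_KHR,
    .usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
    .width = 1280,
    .height = 720,
    .layerCount = 1,
    .viewFormatCount = 1,
    .pViewFormats = &color_format,
};

VkFramebufferAttachmentsCreateInfoKHR attachments_info = {
    .sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_ATTACHMENTS_CREATE_INFO_KHR,
    .attachmentImageInfoCount = 1,
    .pAttachmentImageInfos = &attachment_info,
};

VkFramebufferCreateInfo framebuffer_info = {
    .sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
    .pNext = &attachments_info,
    .flags = VK_FRAMEBUFFER_CREATE_IMAGELESS_BIT_KHR,
    .renderPass = render_pass, // placeholder handle
    .attachmentCount = 1,
    .pAttachments = NULL, // no image views yet
    .width = 1280,
    .height = 720,
    .layers = 1,
};

VkFramebuffer framebuffer;
vkCreateFramebuffer(device, &framebuffer_info, NULL, &framebuffer);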
To begin a render pass which uses an imageless framebuffer, we pass down this struct to vkCmdBeginRenderPass instead:
typedef struct VkRenderPassAttachmentBeginInfo {
    VkStructureType       sType;
    const void*           pNext;
    uint32_t              attachmentCount;
    const VkImageView*    pAttachments;
} VkRenderPassAttachmentBeginInfo;
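A minimal sketch of what that looks like at submission time, assuming backbuffer_view is the image view we finally decided to render into (all handles here are placeholders):

VkRenderPassAttachmentBeginInfoKHR attachment_begin = {
    .sType = VK_STRUCTURE_TYPE_RENDER_PASS_ATTACHMENT_BEGIN_INFO_KHR,
    .attachmentCount = 1,
    .pAttachments = &backbuffer_view, // the actual view, chosen late
};

VkRenderPassBeginInfo begin_info = {
    .sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
    .pNext = &attachment_begin,
    .renderPass = render_pass,
    .framebuffer = framebuffer, // the imageless framebuffer from before
    .renderArea = { .extent = { 1280, 720 } },
};

vkCmdBeginRenderPass(cmd, &begin_info, VK_SUBPASS_CONTENTS_INLINE);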
Overall, I feel like this extension does not really solve the problem of having to know images up front. Knowing the resolution and usage flags of all attachments up front is basically the same as knowing the image views up front. If your engine knows all this information up front, just not the exact image views, then this extension can be useful, and the number of unique VkFramebuffer objects will likely go down as well. Otherwise, in my personal view, there is room to greatly improve things here.
In the next blog on the new Vulkan extensions, I explore 'legacy support extensions.'
[CTAToken URL = "https://github.com/KhronosGroup/Vulkan-Samples/pulls/hanskristian-work" target="_blank" text="Learn more about the new Vulkan extensions" class ="green"]