New Vulkan Extensions for Mobile: Maintenance Extensions

Arm Developers

The Samsung Developers team works with many companies in the mobile and gaming ecosystems. We're excited to support our partner, Arm, as they bring timely and relevant content to developers looking to build games and high-performance experiences. This Vulkan Extensions series will help developers get the most out of the new and game-changing Vulkan extensions on Samsung mobile devices.

Android is enabling a host of useful new Vulkan extensions for mobile. These new extensions are set to improve the state of graphics APIs for modern applications, enabling new use cases and changing how developers can design graphics renderers going forward. In particular, Android R adds a whole set of new Vulkan extensions. These extensions will be available across various Android smartphones, including the Samsung Galaxy S21, which was recently launched on 14 January. Existing Samsung Galaxy S models, such as the Samsung Galaxy S20, can also be upgraded to Android R.

One group of these new Vulkan extensions for mobile is the ‘maintenance extensions’. These plug various holes in the Vulkan specification. Mostly, the lack of these extensions can be worked around, but doing so is annoying for application developers. Having these extensions means less friction overall, which is a very good thing.

VK_KHR_uniform_buffer_standard_layout

This extension is a quiet one, but I still feel it has a lot of impact since it removes a fundamental restriction for applications. Getting to data efficiently is the lifeblood of GPU programming.

One thing I have seen trip up developers again and again is the antiquated set of rules for how uniform buffers (UBOs) are laid out in memory. For whatever reason, UBOs have been stuck with annoying alignment rules that go back to ancient times, yet SSBOs have nice alignment rules. Why?

As an example, let us assume we want to send an array of floats to a shader:

#version 450

layout(set = 0, binding = 0, std140) uniform UBO
{
    float values[1024];
};

layout(location = 0) out vec4 FragColor;
layout(location = 0) flat in int vIndex;

void main()
{
    FragColor = vec4(values[vIndex]);
}

If you are not used to graphics API idiosyncrasies, this looks fine, but danger lurks around the corner. Any array in a UBO will be padded out to 16-byte elements, meaning the only way to have a tightly packed UBO is to use vec4 arrays. Somehow, legacy hardware was hardwired for this assumption. SSBOs never had this problem.

std140 vs std430

You might have run into these weird layout qualifiers in GLSL. They reference some rather old GLSL versions. std140 refers to GLSL 1.40, which was introduced in OpenGL 3.1, and that is the version in which uniform buffers were introduced to OpenGL.

The std140 packing rules define how variables are packed into buffers. The main quirks of std140 are:

  • Vectors are aligned to their size. Notoriously, a vec3 is aligned to 16 bytes, which has tripped up countless programmers over the years, but this is just the nature of vectors in general. Hardware tends to like aligned access to vectors.
  • Array element sizes are aligned to 16 bytes. This one makes it very wasteful to use arrays of float and vec2.

The array quirk mirrors HLSL’s cbuffer. After all, both OpenGL and D3D mapped to the same hardware. Essentially, the assumption I am making here is that hardware was only able to load 16 bytes at a time with 16-byte alignment. Extracting scalars could always be done after the load.
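
In practice, this padding leaks into CPU-side code as well. As a minimal sketch (the struct and helper names here are made up for illustration), uploading a tightly packed float array into the std140 UBO above would require padding every element out to 16 bytes:

#include <stddef.h>

// Hypothetical CPU-side mirror of the std140 UBO above. Each float element of
// values[1024] occupies a full 16 bytes, so only the first component of every
// 16-byte slot carries real data.
typedef struct Std140Float
{
    float value;    // the actual payload
    float pad[3];   // 12 bytes of padding mandated by the std140 array rules
} Std140Float;

// Copy a tightly packed float array into a mapped std140 UBO.
static void upload_std140_floats(void *mapped_ubo, const float *values, size_t count)
{
    Std140Float *dst = (Std140Float *)mapped_ubo;
    for (size_t i = 0; i < count; i++)
        dst[i].value = values[i];   // 4 useful bytes per 16-byte element
}

With std430, or by manually packing four values into each vec4, the same data could be uploaded with a single memcpy.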

std430 was introduced in GLSL 4.30 in OpenGL 4.3 and was designed to be used with SSBOs. std430 removed the array element alignment rule, which means that with std430, we can express this efficiently:

#version 450

layout(set = 0, binding = 0, std430) readonly buffer SSBO
{
    float values[1024];
};

layout(location = 0) out vec4 FragColor;
layout(location = 0) flat in int vIndex;

void main()
{
    FragColor = vec4(values[vIndex]);
}

Basically, the new extension enables std430 layout for use with UBOs as well.

#version 450
#extension GL_EXT_scalar_block_layout : require

layout(set = 0, binding = 0, std430) uniform UBO
{
    float values[1024];
};

layout(location = 0) out vec4 FragColor;
layout(location = 0) flat in int vIndex;

void main()
{
    FragColor = vec4(values[vIndex]);
}

Why not just use SSBOs then?

On some architectures, yes, that is a valid workaround. However, some architectures also have special caches which are designed specifically for UBOs. Improving memory layouts of UBOs is still valuable.

GL_EXT_scalar_block_layout?

The Vulkan GLSL extension which supports std430 UBOs goes a little further and supports the scalar layout as well. This is a completely relaxed layout scheme where alignment requirements are essentially gone; however, it requires a different Vulkan extension (VK_EXT_scalar_block_layout) to work.
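
On the API side, std430 UBOs require the uniformBufferStandardLayout feature, and the scalar layout requires VK_EXT_scalar_block_layout. Below is a minimal sketch of querying both, assuming Vulkan 1.1 so that vkGetPhysicalDeviceFeatures2 is available; the helper name is made up and error handling is omitted:

#include <vulkan/vulkan.h>
#include <stdbool.h>

// Sketch: query support for std430 UBOs (uniformBufferStandardLayout) and the
// fully relaxed scalar block layout.
static bool query_relaxed_ubo_layouts(VkPhysicalDevice gpu)
{
    VkPhysicalDeviceUniformBufferStandardLayoutFeaturesKHR ubo_layout = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_UNIFORM_BUFFER_STANDARD_LAYOUT_FEATURES_KHR,
    };
    VkPhysicalDeviceScalarBlockLayoutFeaturesEXT scalar_layout = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SCALAR_BLOCK_LAYOUT_FEATURES_EXT,
        .pNext = &ubo_layout,
    };
    VkPhysicalDeviceFeatures2 features2 = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2,
        .pNext = &scalar_layout,
    };
    vkGetPhysicalDeviceFeatures2(gpu, &features2);

    // To use the features, pass the same chain through VkDeviceCreateInfo::pNext
    // and enable VK_KHR_uniform_buffer_standard_layout (plus
    // VK_EXT_scalar_block_layout if the scalar layout is desired).
    return ubo_layout.uniformBufferStandardLayout == VK_TRUE;
}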

VK_KHR_separate_depth_stencil_layouts

Depth-stencil images are weird in general. It is natural to think of these two aspects as separate images. However, the reality is that some GPU architectures like to pack depth and stencil together into one image, especially with D24S8 formats.

Expressing image layouts with depth and stencil formats has therefore been somewhat awkward in Vulkan, especially if you want to make one aspect read-only while keeping the other aspect read/write, for example.

In Vulkan 1.0, both depth and stencil needed to be in the same image layout. This means that you are either doing read-only depth-stencil or read/write depth-stencil. This was quickly identified as not being good enough for certain use cases. There are valid use cases where depth is read-only while stencil is read/write, for example in deferred rendering.

Eventually, VK_KHR_maintenance2 added support for some mixed image layouts which lets us express read-only depth, read/write stencil, and vice versa:

VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_STENCIL_READ_ONLY_OPTIMAL_KHR

VK_IMAGE_LAYOUT_DEPTH_READ_ONLY_STENCIL_ATTACHMENT_OPTIMAL_KHR

Usually, this is good enough, but there is a significant caveat to this approach: depth and stencil layouts must be specified and transitioned together. This means that it is not possible to render to the depth aspect while concurrently transitioning the stencil aspect, since changing image layouts is a write operation. If the engine is not designed to couple depth and stencil together, this causes a lot of friction in the implementation.

What this extension does is completely decouple image layouts for depth and stencil aspects and makes it possible to modify the depth or stencil image layouts in complete isolation. For example:

    VkImageMemoryBarrier barrier = {…};

Normally, we would have to specify both the DEPTH and STENCIL aspects for a depth-stencil image. Now, we can completely ignore what stencil is doing and only modify the depth image layout.

    barrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT;
    barrier.oldLayout = VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_OPTIMAL_KHR;
    barrier.newLayout = VK_IMAGE_LAYOUT_DEPTH_READ_ONLY_OPTIMAL_KHR;
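
For comparison, here is a sketch of a complete stencil-only barrier; the pipeline stages and access masks are illustrative choices for this sketch rather than requirements of the extension:

#include <vulkan/vulkan.h>

// Sketch: transition only the stencil aspect of a depth-stencil image to
// read-only, leaving the depth aspect and its layout completely untouched.
static void make_stencil_read_only(VkCommandBuffer cmd, VkImage depth_stencil_image)
{
    VkImageMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT,
        .oldLayout = VK_IMAGE_LAYOUT_STENCIL_ATTACHMENT_OPTIMAL_KHR,
        .newLayout = VK_IMAGE_LAYOUT_STENCIL_READ_ONLY_OPTIMAL_KHR,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .image = depth_stencil_image,
        .subresourceRange = {
            .aspectMask = VK_IMAGE_ASPECT_STENCIL_BIT, // stencil only
            .levelCount = 1,
            .layerCount = 1,
        },
    };

    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT,
                         VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT,
                         0, 0, NULL, 0, NULL, 1, &barrier);
}

Because only the stencil aspect is named in the barrier, the depth aspect simply keeps whatever layout it currently has.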

Similarly, in VK_KHR_create_renderpass2, there are extension structures where you can specify stencil layouts separately from the depth layout if you wish.

typedef struct VkAttachmentDescriptionStencilLayout {
    VkStructureType    sType;
    void*              pNext;
    VkImageLayout      stencilInitialLayout;
    VkImageLayout      stencilFinalLayout;
} VkAttachmentDescriptionStencilLayout;

typedef struct VkAttachmentReferenceStencilLayout {
    VkStructureType    sType;
    void*              pNext;
    VkImageLayout      stencilLayout;
} VkAttachmentReferenceStencilLayout;

As with image memory barriers, it is possible to express layout transitions that only affect either the depth or the stencil attachment.
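
As a sketch of how these structures chain together, the following describes a hypothetical depth-stencil attachment whose depth aspect is read-only while the stencil aspect remains writable; the format and load/store ops are arbitrary choices for illustration:

#include <vulkan/vulkan.h>

// Sketch: describe a depth-stencil attachment with read-only depth and
// writable stencil using the VK_KHR_create_renderpass2 structures.
static void describe_depth_ro_stencil_rw(
    VkAttachmentDescription2KHR *desc,
    VkAttachmentDescriptionStencilLayoutKHR *stencil_layout)
{
    stencil_layout->sType = VK_STRUCTURE_TYPE_ATTACHMENT_DESCRIPTION_STENCIL_LAYOUT_KHR;
    stencil_layout->pNext = NULL;
    stencil_layout->stencilInitialLayout = VK_IMAGE_LAYOUT_STENCIL_ATTACHMENT_OPTIMAL_KHR;
    stencil_layout->stencilFinalLayout = VK_IMAGE_LAYOUT_STENCIL_ATTACHMENT_OPTIMAL_KHR;

    desc->sType = VK_STRUCTURE_TYPE_ATTACHMENT_DESCRIPTION_2_KHR;
    desc->pNext = stencil_layout;               // stencil layouts live in the pNext chain
    desc->flags = 0;
    desc->format = VK_FORMAT_D24_UNORM_S8_UINT; // arbitrary depth-stencil format
    desc->samples = VK_SAMPLE_COUNT_1_BIT;
    desc->loadOp = VK_ATTACHMENT_LOAD_OP_LOAD;
    desc->storeOp = VK_ATTACHMENT_STORE_OP_STORE;
    desc->stencilLoadOp = VK_ATTACHMENT_LOAD_OP_LOAD;
    desc->stencilStoreOp = VK_ATTACHMENT_STORE_OP_STORE;
    // Without the pNext struct, these two fields would have to describe both aspects.
    desc->initialLayout = VK_IMAGE_LAYOUT_DEPTH_READ_ONLY_OPTIMAL_KHR;
    desc->finalLayout = VK_IMAGE_LAYOUT_DEPTH_READ_ONLY_OPTIMAL_KHR;
}

VkAttachmentReferenceStencilLayout is chained into VkAttachmentReference2 in the same way when a subpass needs a separate stencil layout.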

VK_KHR_spirv_1_4

Each core Vulkan version has targeted a specific SPIR-V version. For Vulkan 1.0, we have SPIR-V 1.0. For Vulkan 1.1, we have SPIR-V 1.3, and for Vulkan 1.2 we have SPIR-V 1.5.

SPIR-V 1.4 was an interim version between Vulkan 1.1 and 1.2 which added some nice features, but this extension is mostly useful for developers who like to target SPIR-V themselves. Developers using GLSL or HLSL might not find much use for it. Some highlights of SPIR-V 1.4 that I think are worth mentioning are listed below.
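
Before looking at those highlights, note that consuming SPIR-V 1.4 modules on a Vulkan 1.1 device requires enabling the extension together with VK_KHR_shader_float_controls, which it depends on. A minimal sketch (the helper name is made up, and queue setup and other fields are omitted):

#include <vulkan/vulkan.h>

// Sketch: request SPIR-V 1.4 support when creating a Vulkan 1.1 device.
static const char *spirv_1_4_extensions[] = {
    VK_KHR_SHADER_FLOAT_CONTROLS_EXTENSION_NAME, // dependency of VK_KHR_spirv_1_4
    VK_KHR_SPIRV_1_4_EXTENSION_NAME,
};

static void request_spirv_1_4(VkDeviceCreateInfo *create_info)
{
    create_info->enabledExtensionCount = 2;
    create_info->ppEnabledExtensionNames = spirv_1_4_extensions;
}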

OpSelect between composite objects

Before SPIR-V 1.4, OpSelect only supported selecting between scalars and vectors. SPIR-V 1.4 extends it to composite objects, allowing you to express this kind of code with a simple OpSelect:

    MyStruct s = cond ? MyStruct(1, 2, 3) : MyStruct(4, 5, 6);

OpCopyLogical

There are scenarios in high-level languages where you load a struct from a buffer and then place it in a function variable. If you have ever looked at SPIR-V code for this kind of scenario, you would see glslang copy each element of the struct one by one, which generates bloated SPIR-V code. This is because the struct type that lives in a buffer and the struct type for a function variable are not necessarily the same; Offset decorations are the major culprits here. Copying objects in SPIR-V only works when the types are exactly the same, not “almost the same”. OpCopyLogical fixes this by allowing you to copy objects whose types are identical except for decorations.

Advanced loop control hints

SPIR-V 1.4 adds ways to express partial unrolling, how many iterations are expected, and other such advanced hints, which can help a driver optimize better using knowledge it otherwise would not have. There is no way to express these in normal shading languages yet, but it does not seem difficult to add support for them.

Explicit look-up tables

Describing look-up tables was a bit awkward in SPIR-V. The natural way to do this in SPIR-V 1.3 is to declare an array with Private storage scope and an initializer, access chain into it, and load from it. However, there was never a way to express that a global variable is const, so we rely on compilers to be a little smart. As a case study, let us see what glslang emits when using the Vulkan 1.1 target environment:

#version 450

layout(location = 0) out float FragColor;
layout(location = 0) flat in int vIndex;

const float LUT[4] = float[](1.0, 2.0, 3.0, 4.0);

void main()
{
    FragColor = LUT[vIndex];
}

%float_1 = OpConstant %float 1
%float_2 = OpConstant %float 2
%float_3 = OpConstant %float 3
%float_4 = OpConstant %float 4
%16 = OpConstantComposite %_arr_float_uint_4 %float_1 %float_2 %float_3 %float_4

// This is super weird code, but it is easy for compilers to promote to a LUT.
// If the compiler can prove there are no readers before the OpStore, and only
// one OpStore can statically happen, the compiler can optimize it to a const LUT.
%indexable = OpVariable %_ptr_Function__arr_float_uint_4 Function
OpStore %indexable %16
%24 = OpAccessChain %_ptr_Function_float %indexable %index
%25 = OpLoad %float %24

In SPIR-V 1.4, the NonWritable decoration can also be used with Private and Function storage variables. Add an initializer, and we get something that looks far more reasonable and obvious:

OpDecorate %indexable NonWritable
%16 = OpConstantComposite %_arr_float_uint_4 %float_1 %float_2 %float_3 %float_4

// Initialize an array with a constant expression and mark it as NonWritable.
// This is trivially a LUT.
%indexable = OpVariable %_ptr_Function__arr_float_uint_4 Function %16
%24 = OpAccessChain %_ptr_Function_float %indexable %index
%25 = OpLoad %float %24

VK_KHR_shader_subgroup_extended_types

This extension fixes a hole in Vulkan subgroup support. When subgroups were introduced, it was only possible to use subgroup operations on 32-bit values. However, with 16-bit arithmetic getting more popular, especially float16, there are use cases where you would want to use subgroup operations on smaller arithmetic types. This extension makes that kind of shader possible:

#version 450

// subgroupAdd
#extension GL_KHR_shader_subgroup_arithmetic : require
// FP16 arithmetic
#extension GL_EXT_shader_explicit_arithmetic_types_float16 : require
// Subgroup operations on FP16
#extension GL_EXT_shader_subgroup_extended_types_float16 : require

layout(location = 0) out f16vec4 FragColor;
layout(location = 0) in f16vec4 vColor;

void main()
{
    FragColor = subgroupAdd(vColor);
}
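
On the API side, this shader needs both FP16 arithmetic and subgroup operations on extended types to be supported and enabled. A minimal sketch of the feature query, with a made-up helper name:

#include <vulkan/vulkan.h>
#include <stdbool.h>

// Sketch: check that FP16 arithmetic and subgroup operations on extended
// types are both available before using the shader above.
static bool supports_fp16_subgroup_ops(VkPhysicalDevice gpu)
{
    VkPhysicalDeviceShaderSubgroupExtendedTypesFeaturesKHR extended_types = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_SUBGROUP_EXTENDED_TYPES_FEATURES_KHR,
    };
    VkPhysicalDeviceShaderFloat16Int8FeaturesKHR float16_int8 = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_FLOAT16_INT8_FEATURES_KHR,
        .pNext = &extended_types,
    };
    VkPhysicalDeviceFeatures2 features2 = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2,
        .pNext = &float16_int8,
    };
    vkGetPhysicalDeviceFeatures2(gpu, &features2);

    // The same structs are chained into VkDeviceCreateInfo::pNext at device
    // creation, alongside the matching extension names.
    return float16_int8.shaderFloat16 == VK_TRUE &&
           extended_types.shaderSubgroupExtendedTypes == VK_TRUE;
}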

VK_KHR_imageless_framebuffer

In most engines, using VkFramebuffer objects can feel a bit awkward, since most engine abstractions are based around some idea of:

MyRenderAPI::BindRenderTargets(colorAttachments, depthStencilAttachment)

In this model, VkFramebuffer objects introduce a lot of friction, since engines would almost certainly end up with one of two strategies:

  • Create a VkFramebuffer for every render pass, free later.
  • Maintain a hashmap of all observed attachment and render-pass combinations.

Unfortunately, there are some … reasons why VkFramebuffer exists in the first place, but VK_KHR_imageless_framebuffer at least removes the largest pain point: needing to know the exact VkImageViews that we are going to use before we actually start rendering.

With imageless frame buffers, we can defer specifying the exact VkImageViews we are going to render into until vkCmdBeginRenderPass. However, the frame buffer itself still needs to know certain metadata about the attachments ahead of time, since some drivers unfortunately need this information.

First, we set the VK_FRAMEBUFFER_CREATE_IMAGELESS_BIT flag in vkCreateFramebuffer. This removes the need to set pAttachments. Instead, we specify some parameters for each attachment by passing down this structure as a pNext:

typedef struct VkFramebufferAttachmentsCreateInfo {
    VkStructureType                            sType;
    const void*                                pNext;
    uint32_t                                   attachmentImageInfoCount;
    const VkFramebufferAttachmentImageInfo*    pAttachmentImageInfos;
} VkFramebufferAttachmentsCreateInfo;

typedef struct VkFramebufferAttachmentImageInfo {
    VkStructureType       sType;
    const void*           pNext;
    VkImageCreateFlags    flags;
    VkImageUsageFlags     usage;
    uint32_t              width;
    uint32_t              height;
    uint32_t              layerCount;
    uint32_t              viewFormatCount;
    const VkFormat*       pViewFormats;
} VkFramebufferAttachmentImageInfo;

Essentially, we need to specify almost everything that vkCreateImage would specify. The only thing we avoid is having to know the exact image views we need to use.
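
Putting it together, a sketch of creating an imageless frame buffer for a single color attachment could look like the following; the render pass, format and dimensions are assumptions for illustration, and the imagelessFramebuffer feature must have been enabled at device creation:

#include <vulkan/vulkan.h>

// Sketch: create a framebuffer without binding any VkImageView yet.
static VkFramebuffer create_imageless_framebuffer(VkDevice device,
                                                  VkRenderPass render_pass,
                                                  uint32_t width, uint32_t height)
{
    VkFormat color_format = VK_FORMAT_R8G8B8A8_UNORM;

    VkFramebufferAttachmentImageInfoKHR image_info = {
        .sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_ATTACHMENT_IMAGE_INFO_KHR,
        .usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
        .width = width,
        .height = height,
        .layerCount = 1,
        .viewFormatCount = 1,
        .pViewFormats = &color_format,
    };

    VkFramebufferAttachmentsCreateInfoKHR attachments_info = {
        .sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_ATTACHMENTS_CREATE_INFO_KHR,
        .attachmentImageInfoCount = 1,
        .pAttachmentImageInfos = &image_info,
    };

    VkFramebufferCreateInfo create_info = {
        .sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
        .pNext = &attachments_info,
        .flags = VK_FRAMEBUFFER_CREATE_IMAGELESS_BIT_KHR,
        .renderPass = render_pass,
        .attachmentCount = 1,   // still the number of attachments ...
        .pAttachments = NULL,   // ... but no image views are required yet
        .width = width,
        .height = height,
        .layers = 1,
    };

    VkFramebuffer framebuffer = VK_NULL_HANDLE;
    vkCreateFramebuffer(device, &create_info, NULL, &framebuffer);
    return framebuffer;
}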

To begin a render pass which uses an imageless frame buffer, we instead pass this struct to vkCmdBeginRenderPass, chained into the pNext of VkRenderPassBeginInfo:

typedef struct VkRenderPassAttachmentBeginInfo {
    VkStructureType   sType;
    const void*       pNext;
    uint32_t          attachmentCount;
    const VkImageView* pAttachments;
} VkRenderPassAttachmentBeginInfo;
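
A matching sketch of beginning the render pass, where the actual image view is finally supplied and must match the metadata given at frame buffer creation (clear values omitted for brevity):

#include <vulkan/vulkan.h>

// Sketch: bind the real image view only at vkCmdBeginRenderPass time.
static void begin_imageless_render_pass(VkCommandBuffer cmd,
                                        VkRenderPass render_pass,
                                        VkFramebuffer framebuffer,
                                        VkImageView color_view,
                                        uint32_t width, uint32_t height)
{
    VkRenderPassAttachmentBeginInfoKHR attachment_begin = {
        .sType = VK_STRUCTURE_TYPE_RENDER_PASS_ATTACHMENT_BEGIN_INFO_KHR,
        .attachmentCount = 1,
        .pAttachments = &color_view,
    };

    VkRenderPassBeginInfo begin_info = {
        .sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
        .pNext = &attachment_begin,
        .renderPass = render_pass,
        .framebuffer = framebuffer,
        .renderArea = { .extent = { width, height } },
    };

    vkCmdBeginRenderPass(cmd, &begin_info, VK_SUBPASS_CONTENTS_INLINE);
}

Since the views are no longer baked into the VkFramebuffer, the same frame buffer object can be reused with different image views from one frame to the next, for example one per swapchain image.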

Conclusions

Overall, I feel like this extension does not really solve the problem of having to know images up front. Knowing the resolution and usage flags of all attachments up front is basically as demanding as knowing the image views up front either way. If your engine knows all this information up front, just not the exact image views, then this extension can be useful. The number of unique VkFramebuffer objects will likely go down as well, but otherwise, in my personal view, there is room to greatly improve things.

In the next blog on the new Vulkan extensions, I explore 'legacy support extensions.'

Follow Up

Thanks to Hans-Kristian Arntzen and the team at Arm for bringing this great content to the Samsung Developers community. We hope you find this information about Vulkan extensions useful for developing your upcoming mobile games.

The Samsung Developers site has many resources for developers looking to build for and integrate with Samsung devices and services. Stay in touch with the latest news by creating a free account or by subscribing to our monthly newsletter. Visit the Marketing Resources page for information on promoting and distributing your apps and games. Finally, our developer forum is an excellent way to stay up-to-date on all things related to the Galaxy ecosystem.