Numerous Mali devices (e.g. the Galaxy S10e) using the Mali-G76 (2nd-gen Bifrost) produce a VK_DEVICE_LOST error when rendering 250K triangles or more. I read about the 180 MB driver limit on Mali systems, and how exceeding it simply hands a VK_DEVICE_LOST error back to the developer, who is then expected to split render passes. We don't have this issue with Adreno or other Android devices. iOS also has a parameter buffer, but it flushes it behind the scenes, so we've never hit any issues there either.
community.arm.com/.../memory-limits-with-vulkan-on-mali-gpus
The device lost error happens when I turn on terrain, or turn off culling on the terrain. The resulting spike in triangle count, from 200K tris that render fine up to 250K tris, is when Vulkan returns VK_DEVICE_LOST, preceded by a message about "QueueSignalReleaseImageANDROID failed: -4". Looking that up in the Vulkan sources indicates it is tied to the framebuffer loss, so it may just be the first symptom of the device loss.
So I don't have a lot to go on, and validation seems to crash the driver with an unknown symbol. I was able to fix a few validation errors using other, non-Mali devices, and this code has mostly been working until these high polycounts are hit. The current terrain setup is:
1. Chunk up terrain into index chunks that represent spatially close triangles. These can be culled.
2. Copy out the indices for each specific material into new chunks (these are a subset of the indices in the original chunk). LODs work the same way.
3. Draw each visible chunk that corresponds to a given material with vkCmdDrawIndexedIndirect (a rough sketch follows below). Disabling this optimization does not prevent the crash.
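For reference, step 3 looks roughly like the sketch below. This is a simplified illustration, not our exact code: ChunkDraw, visibleChunks, and the persistently mapped indirect buffer are names I'm making up here, and per-material pipeline/descriptor binding is omitted.

    #include <vulkan/vulkan.h>
    #include <vector>
    #include <cstdint>

    // One VkDrawIndexedIndirectCommand per visible chunk, written into a
    // host-visible indirect buffer, then drawn one chunk at a time.
    struct ChunkDraw {
        uint32_t firstIndex;   // offset into the material's index chunk
        uint32_t indexCount;   // indices in this chunk
        uint32_t materialId;
    };

    void recordTerrain(VkCommandBuffer cmd,
                       VkBuffer indirectBuffer,
                       const std::vector<ChunkDraw>& visibleChunks,
                       VkDrawIndexedIndirectCommand* mapped) // mapped indirect buffer memory
    {
        for (uint32_t i = 0; i < visibleChunks.size(); ++i) {
            const ChunkDraw& c = visibleChunks[i];
            mapped[i].indexCount    = c.indexCount;
            mapped[i].instanceCount = 1;
            mapped[i].firstIndex    = c.firstIndex;
            mapped[i].vertexOffset  = 0;   // all chunks reference the same vertex buffer
            mapped[i].firstInstance = 0;
        }
        // One indirect draw per chunk; contiguous chunks of the same material
        // could be batched into one call with drawCount > 1 if multiDrawIndirect is enabled.
        for (uint32_t i = 0; i < visibleChunks.size(); ++i) {
            vkCmdDrawIndexedIndirect(cmd, indirectBuffer,
                                     i * sizeof(VkDrawIndexedIndirectCommand),
                                     1, sizeof(VkDrawIndexedIndirectCommand));
        }
    }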
I read the Mali guide and there's not much to go on there about organizing vertex or index buffer data. In general, iOS doesn't even recommend anything like repacking. Peter Harris mentioned that Bifrost copies the entire min/maxIndex range of vertices, whereas Valhall copies only the vertices of visible (non-backfaced) triangles. So Valhall gets around 50% more out of the same parameter buffer if half the triangles are backfacing.
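To make that concrete, this is how I'm estimating the per-draw vertex footprint on Bifrost: the whole [minIndex, maxIndex] span of a draw gets allocated, regardless of how many of those vertices the indices actually touch. This is just my reading of Peter's description; the actual per-vertex cost inside the parameter buffer isn't documented here.

    #include <cstdint>

    struct IndexRange { uint32_t minIndex; uint32_t maxIndex; };

    // Scan one chunk's indices (uint16 in our case) for its min/max range.
    static IndexRange computeRange(const uint16_t* indices,
                                   uint32_t firstIndex, uint32_t indexCount)
    {
        IndexRange r = { UINT32_MAX, 0 };
        for (uint32_t i = 0; i < indexCount; ++i) {
            uint32_t v = indices[firstIndex + i];
            if (v < r.minIndex) r.minIndex = v;
            if (v > r.maxIndex) r.maxIndex = v;
        }
        return r;
    }

    // Vertices a "copy the whole range" tiler would touch for this draw,
    // even if most of them are never referenced by the chunk's indices.
    static uint32_t rangeVertexCount(IndexRange r)
    {
        return (r.minIndex > r.maxIndex) ? 0 : (r.maxIndex - r.minIndex + 1);
    }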
With things moving towards mesh shaders and meshlets as in UE5, I was considering repacking/reordering/splitting our vertex buffers so that each chunk's indices form a mostly incrementing sequence and the index range is as tight as possible. If the chunks are small enough, 8-bit indices (via VK_EXT_index_type_uint8) might even suffice. But in step 3, we may pass say 100 of 200 index chunks to the driver that all reference a single vertex buffer. I understand that within one index range (firstIndex, indexCount) all vertices in the min/max range are transformed, but if those 100 index chunks reference half the buffer, will only that half be allocated in the parameter buffer? LODs could be packed smallest to largest by appending the unique vertices from the larger LODs to the end.
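The repacking I have in mind is something like the sketch below: walk a chunk's indices in order, append each newly seen vertex to a contiguous region of the rebuilt vertex buffer, and rewrite the indices against that tight range. The names (repackChunk, remap, outVertices) are illustrative; a real version would also handle the smallest-to-largest LOD append ordering mentioned above.

    #include <vector>
    #include <unordered_map>
    #include <cstdint>

    struct Vertex { float px, py, pz; /* ... other attributes ... */ };

    // After repacking, a chunk's [minIndex, maxIndex] range equals its unique
    // vertex count, so a range-copying tiler allocates no unused vertices.
    void repackChunk(const std::vector<Vertex>& srcVertices,
                     std::vector<uint32_t>& chunkIndices,   // rewritten in place
                     std::vector<Vertex>& outVertices)      // packed vertex buffer being built
    {
        std::unordered_map<uint32_t, uint32_t> remap;       // old index -> new index
        const uint32_t base = (uint32_t)outVertices.size();
        for (uint32_t& idx : chunkIndices) {
            auto it = remap.find(idx);
            if (it == remap.end()) {
                uint32_t newIndex = base + (uint32_t)remap.size();
                remap.emplace(idx, newIndex);
                outVertices.push_back(srcVertices[idx]);     // duplicates vertices shared across chunks
                idx = newIndex;
            } else {
                idx = it->second;
            }
        }
    }

The obvious trade-off is that vertices shared on chunk boundaries get duplicated, spending a little vertex memory to keep each chunk's range tight.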
I could use some info on parameter buffer allocation with DrawIndirect calls vs. regular draws. I have DrawIndirect disabled for now.
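With DrawIndirect disabled, the fallback is just the equivalent loop of regular draws (reusing the ChunkDraw sketch from above). As far as I can tell, the (firstIndex, indexCount) ranges per draw are identical on both paths, so I'd expect the same allocation behaviour, but that's exactly the part I'd like confirmed.

    // Same chunks drawn without indirection: the parameters match the
    // VkDrawIndexedIndirectCommand fields one for one.
    for (const ChunkDraw& c : visibleChunks) {
        vkCmdDrawIndexed(cmd, c.indexCount, /*instanceCount*/ 1,
                         c.firstIndex, /*vertexOffset*/ 0, /*firstInstance*/ 0);
    }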
There's no real info to go on. robustBufferAccess didn't catch anything, and nothing is reported when the device is lost, even with validation enabled. Validation seems to seg-fault when I use debug markers/groups around the pass, so I've disabled markers when validation is on.
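For completeness, robustBufferAccess is enabled the standard way at device creation, as below. Since it only clamps/zero-fills out-of-range buffer accesses rather than reporting them, its silence doesn't prove much.

    // robustBufferAccess is a core VkPhysicalDeviceFeatures bit; it makes
    // out-of-bounds buffer accesses well-defined, it does not log or assert.
    VkPhysicalDeviceFeatures supported{};
    vkGetPhysicalDeviceFeatures(physicalDevice, &supported);

    VkPhysicalDeviceFeatures enabled{};
    enabled.robustBufferAccess = supported.robustBufferAccess;  // VK_TRUE on this device

    VkDeviceCreateInfo deviceInfo{ VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO };
    deviceInfo.pEnabledFeatures = &enabled;
    // ... queue create infos, extensions, then vkCreateDevice(...)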
I can strip the two terrain shaders (and the two variants of those) down to using no varyings. If I only write out white from the fragment shader, the device isn't lost; I can render 450K polys with no problem on the same Mali device, though typically it's around 200K total polys. I removed all half usage, and that didn't help. I also switched from uint32 to uint16 indices, but again that didn't help. The varying memory would be tied to the vertex count anyway. I haven't yet repacked the vertices.
D/mali.instrumentation.graph.work: key already added <- see a ton of these every frame
This appears right before the DEVICE_LOST: E/vulkan: QueueSignalReleaseImageANDROID failed: -4
E/CRASH: ASSERT! Foo.cpp (2888): Renderer Crash, Error: ERROR_DEVICE_LOST, exiting app.. Run 'make callstack' to see the symbolicated crash callstack
Also, Mali seems to require VkPhysicalDeviceFloat16Int8FeaturesKHR to be set up for half shaders, where other platforms don't have this requirement. I don't know if that means the other platforms aren't actually running the half code in the shaders.
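For reference, this is the feature chaining Mali needs before the fp16 shader variants will run. Other drivers appear to accept the SPIR-V Float16 capability without it, which is why I can't tell whether they actually execute the half path.

    // Query and enable shaderFloat16 via VK_KHR_shader_float16_int8 at device creation.
    // vkGetPhysicalDeviceFeatures2 is Vulkan 1.1; on 1.0 use the KHR variant.
    VkPhysicalDeviceFloat16Int8FeaturesKHR float16Int8{
        VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FLOAT16_INT8_FEATURES_KHR };

    VkPhysicalDeviceFeatures2 features2{ VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2 };
    features2.pNext = &float16Int8;
    vkGetPhysicalDeviceFeatures2(physicalDevice, &features2);  // float16Int8.shaderFloat16 reports support

    VkDeviceCreateInfo deviceInfo{ VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO };
    deviceInfo.pNext = &features2;   // pEnabledFeatures must stay null when chaining features2
    // VK_KHR_shader_float16_int8 must also be listed in ppEnabledExtensionNames,
    // then vkCreateDevice(...)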