Hi. We are experiencing unexpected depth buffer behaviour when setting glDepthRange with equal min and max values. In the example below, a single quad is rendered:

- one trace sets glDepthRangef(0.49, 0.5), which produces the expected result;
- another trace sets glDepthRangef(0.5, 0.5), which produces an unexpected result.

The Graphics Analyzer traces can be found at https://drive.google.com/drive/folders/1e_oDplD3EyXENUuVzsrnEi17e-T_CUfC?usp=sharing

GL_VENDOR = ARM, GL_RENDERER = Mali-G710, GL_VERSION = OpenGL ES 3.2 v1.r38p1-01eac0.55eb2d40cce8f18c0f57f61c686a946f

Result of single quad rendering with glDepthRangef(0.49, 0.5):
Result of single quad rendering (same state, uniforms, ...) with glDepthRangef(0.5, 0.5):
Also worth noting that this issue is not reproducible on other GPUs. The question is: is this a known issue, and what are the recommended workarounds?

Thank you in advance, Aleksei
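P.S. For reference, a minimal sketch of the state around the draw (hypothetical names; EGL context creation and the quad program/VAO setup are omitted, and assumed to be standard GLES 3.2 boilerplate):

/* Renders one quad with the given depth range; switching zNear between
 * 0.49f and 0.5f is the only state difference between the two traces. */
#include <GLES3/gl32.h>

static void draw_quad_with_depth_range(GLuint program, GLuint quad_vao,
                                       GLfloat zNear, GLfloat zFar)
{
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);

    /* The call under investigation. */
    glDepthRangef(zNear, zFar);

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUseProgram(program);
    glBindVertexArray(quad_vao);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}

/* Expected result:   draw_quad_with_depth_range(prog, vao, 0.49f, 0.5f); */
/* Unexpected result: draw_quad_with_depth_range(prog, vao, 0.5f,  0.5f); */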
Thanks for the bug report.
This smells like a precision issue somewhere in our implementation, but we will need to check. If it is a precision issue, I expect what you are doing (moving min or max slightly to increase the delta) is about as good as workarounds are going to get.
Cheers, Pete
One minor footnote from looking at the shader. I don't think it's causing this problem, but is there a reason you are implementing the gl_Position divide by W in the shader code? The hardware performs the clip-space to normalized-device-coordinate conversion automatically, so doing it manually in the shader isn't necessary. It will definitely cost some performance and may also introduce additional loss of precision.
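To illustrate what I mean, a sketch of the two vertex-shader variants (hypothetical shader source as C string constants; your real shader will differ):

static const char *vs_manual_divide =
    "#version 320 es\n"
    "layout(location = 0) in vec4 a_position;\n"
    "uniform mat4 u_mvp;\n"
    "void main() {\n"
    "    vec4 p = u_mvp * a_position;\n"
    "    gl_Position = p / p.w;   // manual divide: redundant, may lose precision\n"
    "}\n";

static const char *vs_recommended =
    "#version 320 es\n"
    "layout(location = 0) in vec4 a_position;\n"
    "uniform mat4 u_mvp;\n"
    "void main() {\n"
    "    gl_Position = u_mvp * a_position;   // let the hardware divide by w\n"
    "}\n";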
It also happens without the divide. The divide was actually introduced during debugging to see whether it could mitigate the issue, but the effect is the same: the result does not change whether we divide or not.
Just to confirm: we've reproduced the issue, and your existing workaround (keeping the min-max difference a small amount above zero) is what we would recommend as a software fix.
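In case it helps, a small sketch of the kind of clamp we mean (the epsilon here is illustrative, not a tuned constant; 0.01 matches the delta in your working trace):

#include <GLES3/gl32.h>

/* Wraps glDepthRangef so that the min-max delta never collapses to zero. */
static void set_depth_range_safe(GLfloat n, GLfloat f)
{
    const GLfloat kMinDelta = 0.01f;  /* assumption: pick a delta suited to your depth precision needs */
    if (f - n < kMinDelta) {
        n = f - kMinDelta;
        if (n < 0.0f) {               /* keep the range inside [0, 1] */
            n = 0.0f;
            f = n + kMinDelta;
        }
    }
    glDepthRangef(n, f);
}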
Thx for the confirmation!