Depth testing : Context is everything

Myy
June 13, 2016
1 minute read time.

I just lost a few hours playing with the Z value between draw calls, trying to get Z-layering working as advised by peterharris in my question For binary transparency : Clip, discard or blend ?.

However, for reasons I did not understand at the time, the Z value seemed to be completely ignored. Only the order of the glDraw* calls was taken into account.

I really tried everything:

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glDepthMask(GL_TRUE);
glClearDepthf(1.0f);
glDepthRangef(0.1f, 1.0f);
glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT );

Still, each glDrawArrays call drew its pixels over previously drawn pixels that had a lower Z value. I switched the comparison passed to glDepthFunc, swapped the Z values... same result.

I was really starting to think that Z-layering only worked within a single draw batch...

Until I searched the OpenGL wiki for "Depth Buffer" information and stumbled upon Common Mistakes : Depth Testing Doesn't Work:

Assuming all of that has been set up correctly, your framebuffer may not have a depth buffer at all. This is easy to see for a Framebuffer Object you created. For the Default Framebuffer, this depends entirely on how you created your OpenGL Context.
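This is easy to confirm at runtime. A minimal sketch, assuming a current OpenGL ES 2.0 context is already bound (this helper is illustrative and not part of the original post):

```c
/* Query how many depth bits the currently bound framebuffer has.
 * Requires a current OpenGL ES 2.0 context. */
#include <GLES2/gl2.h>
#include <stdio.h>

void print_depth_bits(void)
{
    GLint depth_bits = 0;
    glGetIntegerv(GL_DEPTH_BITS, &depth_bits);
    /* 0 means the framebuffer has no depth buffer at all:
     * glEnable(GL_DEPTH_TEST) then silently does nothing. */
    printf("Depth buffer size: %d bits\n", depth_bits);
}
```

A result of 0 here is exactly the symptom described above: every depth-related call succeeds, yet draw order is all that matters.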

"... Not again ..."

After a quick search for "EGL Depth buffer" on the web, I found the EGL manual page, eglChooseConfig - EGL Reference Pages, which states:

EGL_DEPTH_SIZE

    Must be followed by a nonnegative integer that indicates the desired depth buffer size, in bits. The smallest depth buffers of at least the specified size is preferred. If the desired size is zero, frame buffer configurations with no depth buffer are preferred. The default value is zero.

    The depth buffer is used only by OpenGL and OpenGL ES client APIs.

The solution

Adding EGL_DEPTH_SIZE, 16 in the configuration array provided to EGL solved the problem.
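A sketch of what that configuration array can look like; the color sizes and surface/renderable bits here are typical GLES2 values assumed for illustration, not copied from the original code:

```c
/* EGL config attribute list requesting a 16-bit depth buffer.
 * Surrounding EGL setup (display, surface, context) is assumed. */
#include <EGL/egl.h>

static const EGLint config_attribs[] = {
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
    EGL_RED_SIZE,        8,
    EGL_GREEN_SIZE,      8,
    EGL_BLUE_SIZE,       8,
    EGL_DEPTH_SIZE,      16, /* the missing piece: the default is 0,
                                which prefers configs with NO depth buffer */
    EGL_NONE
};

EGLConfig choose_config(EGLDisplay display)
{
    EGLConfig config;
    EGLint num_configs = 0;
    eglChooseConfig(display, config_attribs, &config, 1, &num_configs);
    return config;
}
```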

I should have known.

  • Peter Harris over 9 years ago

    Yes, that's the basic idea.

  • Myy over 9 years ago

    So, one big square outline using a set of TRIANGLES?

    Something like this wonderful work of art?

    [image: meshcut.png]
  • Peter Harris over 9 years ago

    Yep, exactly; one outline for the opaque mesh, and one for the partially transparent stuff. As per the blog, my only advice is not to try to make the outline perfect - it would cost too many vertices - keep it coarse and approximate.

  • Myy over 9 years ago

    > The texture atlas should be the same size - you just have two different meshes

    Ah! I didn't think about that. So it boils down to finding an optimized mesh for the non-opaque outline. I'll try that, then. Thanks!

  • Peter Harris over 9 years ago

    > For animated character sprites, this might need texture atlas with twice the size, though.

    The texture atlas should be the same size - you just have two different meshes (one opaque, one non-opaque) to read the relevant pieces out of the atlas.

    > Is it fine, performance wise, to draw everything in a very small framebuffer (like 320x176 pixels) and then apply this framebuffer on a "screen-size" (1080p) quad ? Or would the texture stretching cause more stress than just drawing directly on the screen ?

    It sounds like it would be slower than just rendering directly.

    Pete
