I just lost a few hours playing with the Z values between draw calls, trying out the Z-layering advised by peterharris in my question "For binary transparency : Clip, discard or blend ?".
However, for reasons I did not understand at the time, the Z values seemed to be completely ignored. Only the order of the glDraw* calls was taken into account.
I really tried everything:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glDepthMask(GL_TRUE);
glClearDepthf(1.0f);
glDepthRangef(0.1f, 1.0f);
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
Still... each glDrawArrays call drew its pixels over previously drawn pixels that had a lower Z value. I switched the comparison passed to glDepthFunc, swapped the Z values, ... same result.
I really started to think that Z-layering only worked within a single draw batch...
Until I searched the OpenGL wiki for "Depth Buffer" information and stumbled upon Common Mistakes: Depth Testing Doesn't Work:
Assuming all of that has been set up correctly, your framebuffer may not have a depth buffer at all. This is easy to see for a Framebuffer Object you created. For the Default Framebuffer, this depends entirely on how you created your OpenGL Context.
"... Not again ..."
After a quick search for "EGL Depth buffer" on the web, I found the EGL manual page, eglChooseConfig - EGL Reference Pages, which states:
EGL_DEPTH_SIZE
Must be followed by a nonnegative integer that indicates the desired depth buffer size, in bits. The smallest depth buffers of at least the specified size is preferred. If the desired size is zero, frame buffer configurations with no depth buffer are preferred. The default value is zero.
The depth buffer is used only by OpenGL and OpenGL ES client APIs.
Adding EGL_DEPTH_SIZE, 16 to the configuration attribute array provided to EGL solved the problem.
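For anyone hitting the same wall, the fix boils down to an attribute list along these lines (a minimal sketch, assuming an OpenGL ES 2.0 window surface; the colour channel sizes are just what I happen to request, not requirements):

#include <EGL/egl.h>

/* Without EGL_DEPTH_SIZE the default is zero, so configurations with *no*
 * depth buffer are preferred and depth testing silently does nothing. */
static const EGLint config_attribs[] = {
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
    EGL_RED_SIZE,        8,
    EGL_GREEN_SIZE,      8,
    EGL_BLUE_SIZE,       8,
    EGL_ALPHA_SIZE,      8,
    EGL_DEPTH_SIZE,      16,   /* the missing piece: ask for a 16-bit depth buffer */
    EGL_NONE
};

/* Usage, assuming 'display' has already been set up with eglGetDisplay
 * and eglInitialize:
 *   EGLConfig config;
 *   EGLint num_configs;
 *   eglChooseConfig(display, config_attribs, &config, 1, &num_configs);
 */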
I should have known.
Indeed, in the shield example of your blog post, it seems like a good idea to split a simple 2D model into two parts. For animated character sprites, though, this might require a texture atlas twice the size (character main parts / character outer parts carrying the transparent edges).
Before going down that route, there's still something I'd like to experiment with for pixel sprites.
Is it fine, performance-wise, to draw everything into a very small framebuffer (like 320x176 pixels) and then apply this framebuffer to a screen-sized (1080p) quad? Or would the texture stretching cause more stress than just drawing directly at screen resolution?
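For reference, here is roughly the setup I have in mind (only a sketch; the 320x176 size, the GL_NEAREST filtering and all the names are my own assumptions):

#include <GLES2/gl2.h>

#define LOW_RES_W 320
#define LOW_RES_H 176

static GLuint low_res_fbo, low_res_color, low_res_depth;

void create_low_res_target(void)
{
    /* Colour texture the scene is rendered into (assumes the device can
     * render to an RGBA8888 texture, which is common but not guaranteed). */
    glGenTextures(1, &low_res_color);
    glBindTexture(GL_TEXTURE_2D, low_res_color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, LOW_RES_W, LOW_RES_H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    /* Depth renderbuffer so Z-layering still works in the off-screen pass. */
    glGenRenderbuffers(1, &low_res_depth);
    glBindRenderbuffer(GL_RENDERBUFFER, low_res_depth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16,
                          LOW_RES_W, LOW_RES_H);

    glGenFramebuffers(1, &low_res_fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, low_res_fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, low_res_color, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, low_res_depth);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle the error */
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

/* Per frame:
 *   glBindFramebuffer(GL_FRAMEBUFFER, low_res_fbo);
 *   glViewport(0, 0, LOW_RES_W, LOW_RES_H);
 *   ... draw the sprites ...
 *   glBindFramebuffer(GL_FRAMEBUFFER, 0);
 *   glViewport(0, 0, 1920, 1080);
 *   ... draw a full-screen quad textured with low_res_color ...
 */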
It's going to depend on how big your sprites are on screen.
Vertices have a processing and bandwidth cost, but using triangles means you can more closely follow the outline of the character to throw away totally transparent parts (e.g. see the shield example in my blog - you couldn't follow the outline of the shield using a point sprite).
If your characters are small (in terms of pixel count) then points are probably fine; if they are large and have significant amounts of totally transparent pixels then you may gain performance / save power by using triangles and the depth testing approach in my blog.
Thanks! Though now I'm wondering if I should continue to use point sprites or go for triangles for the character sprites.
Glad you got it working