So I'm trying to create a multisampled depth texture and I'm seeing some oddities.
First off, this GLES2-style call works:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL );
But then I can't use the same format for the multisampled texture:
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT, Width, Height, GL_TRUE );
It throws GL_INVALID_ENUM, and reading the docs, that error can only come from the texture target (which is correct here), so the offending parameter must be GL_DEPTH_COMPONENT; presumably glTexStorage2DMultisample only accepts sized internal formats.
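If that's right, switching to a sized internal format should make the call go through. A minimal sketch of what I mean (GL_DEPTH_COMPONENT24 here is just the variant I picked for the example; 16 or 32F should behave the same way):

GLuint DepthTexMS;
glGenTextures( 1, &DepthTexMS );
glBindTexture( GL_TEXTURE_2D_MULTISAMPLE, DepthTexMS );
// Sized internal format, so no GL_INVALID_ENUM from the format parameter
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT24, Width, Height, GL_TRUE );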
Then I tried creating GL_DEPTH_COMPONENT16, 24 and 32F textures:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL );
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL );
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, Width, Height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );
And while they all work, I don't see my depth-buffer-based effects like depth of field, so I can only assume the values are either not being written (all 0s) or the format is wrong, e.g. I would need an integer sampler. In these scenarios I expect to use a standard sampler2D and get an .r-only texture with values in 0..1. I have the exact same issue on Qualcomm's Adreno 320, but not on Apple's A7 GPU or my AMD R9 280X.
Am I doing something wrong?
I just tried that; it doesn't get resolved. But the issue is a little deeper, due to the separate problem I reported above: creating a sized depth texture using either glTexImage2D or glTexStorage2D, like:
glTexStorage2D( GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT16, Tex->Info.Width, Tex->Info.Height );
results in correct rendering to the color buffer (so depth testing against the depth texture works), but sampling that depth texture later in a shader (for, let's say, DOF) returns 0 for every pixel. This is without any antialiasing or resolve involved. I did also try resolving AA into it, but that doesn't work.
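For context, the setup is roughly the following sketch of my actual code (the FBO and texture names are made up for the example):

GLuint DepthTex, FBO;
glGenTextures( 1, &DepthTex );
glBindTexture( GL_TEXTURE_2D, DepthTex );
glTexStorage2D( GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT16, Width, Height );

glGenFramebuffers( 1, &FBO );
glBindFramebuffer( GL_FRAMEBUFFER, FBO );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, DepthTex, 0 );
// The FBO reports GL_FRAMEBUFFER_COMPLETE and depth testing works;
// it's only sampling DepthTex afterwards that returns 0.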
In the case where you are reading black from the sized DEPTH_COMPONENT16 resolved render buffer, what texture filter are you using? Note that the sized depth formats only support GL_NEAREST filtering (see section 8.16 Texture Completeness in the OpenGL ES 3.1 specification):
Using the preceding definitions, a texture is complete unless any of the following conditions hold true:

<snip>

The effective internal format specified for the texture arrays is a sized internal depth or depth and stencil format (see table 8.14), the value of TEXTURE_COMPARE_MODE is NONE, and either the magnification filter is not NEAREST or the minification filter is neither NEAREST nor NEAREST_MIPMAP_NEAREST.
If you violate any of the conditions in section 8.16 then the texture is classified as incomplete, and rendering is undefined - our driver will default to returning a black value in this case.
HTH, Pete
Oh wow, I had no idea there was such a restriction on sized depth formats in ES. I just tested setting GL_NEAREST as the min/mag filter for all depth textures, and now I see the depth effects properly, so that's fixed. I also tested on my Nexus 4, where I had the same "problem", and it now works there as well. Cool!
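For anyone hitting the same thing, the fix boils down to two state settings per depth texture (sketch):

glBindTexture( GL_TEXTURE_2D, DepthTex );
// Sized depth formats with TEXTURE_COMPARE_MODE == NONE are only
// complete with NEAREST filtering (OpenGL ES 3.1, section 8.16)
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );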
Resolving the AA depth, though, still doesn't work; I tried with and without glTexStorage2D.
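What I'm attempting for the resolve is a plain framebuffer blit, roughly like this (a sketch; MSAAFBO and ResolveFBO are placeholder names, and I'm assuming both FBOs have identical depth formats, which ES requires for depth blits):

glBindFramebuffer( GL_READ_FRAMEBUFFER, MSAAFBO );
glBindFramebuffer( GL_DRAW_FRAMEBUFFER, ResolveFBO );
// Depth blits must use GL_NEAREST and matching source/destination formats
glBlitFramebuffer( 0, 0, Width, Height, 0, 0, Width, Height, GL_DEPTH_BUFFER_BIT, GL_NEAREST );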