So I'm trying to create a multisampled depth texture and I'm seeing some oddities.
First off, this GLES2-like call works:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL );
But then I can't use it for the multisampled texture:
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT, Width, Height, GL_TRUE );
It throws GL_INVALID_ENUM, and according to the docs that error can only come from the texture target, which is fine here, so the next suspect is GL_DEPTH_COMPONENT.
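Presumably the fix is to pass a sized internal format instead, so I'd expect something like this to be accepted (untested sketch, GL_DEPTH_COMPONENT24 picked just as an example, and assuming 4 is within MAX_DEPTH_TEXTURE_SAMPLES):
// Sized internal formats are what glTexStorage2DMultisample expects.
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT24, Width, Height, GL_TRUE );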
Then I tried creating GL_DEPTH_COMPONENT16, 24 and 32F textures:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL );
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL );
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, Width, Height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );
And while they all work, I don't see my depth-buffer-based effects like depth of field, so I can only assume the values are either not being saved (all 0s) or the format is wrong, e.g. I would need an integer sampler. In these scenarios I expect to use a standard sampler2D and get an .r-only texture with values in 0..1. I have the exact same issue on Qualcomm's Adreno 320, but not on Apple's A7 GPU or my AMD R9 280X.
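For reference, the relevant setup is essentially the usual one (simplified; DepthTex stands in for my actual handle):
// Scene pass: the depth texture is the FBO's depth attachment.
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, DepthTex, 0 );
// Post-process pass (e.g. DOF): bind it again and read .r through a sampler2D.
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_2D, DepthTex );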
Am I doing something wrong?
since glBlitFramebuffer doesn't resolve the depth buffer on any GLES3+ implementation
It's not an implementation issue, the specification explicitly disallows this. The only supported filtering for depth and/or stencil blits is GL_NEAREST (see page 196 in the OpenGL ES 3.0.3 specification).
*EDIT* No it doesn't - sorry - see below!
As you have found, the workaround is to load these as a normal texture and resolve using a sampler filter.
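If it helps, something along these lines is roughly what I have in mind: a manual resolve pass that reads the multisampled depth texture in a shader (rough, untested sketch; uDepthMS and the sample count of 4 are placeholders, and the FBO bound for this pass has the single-sampled depth texture as its GL_DEPTH_ATTACHMENT, with depth func GL_ALWAYS and depth writes enabled):
// Fullscreen fragment shader that averages the depth samples by hand.
// vTexCoord comes from a trivial fullscreen-quad vertex shader (not shown).
const char* DepthResolveFrag =
    "#version 310 es\n"
    "precision highp float;\n"
    "uniform highp sampler2DMS uDepthMS;\n"
    "in vec2 vTexCoord;\n"
    "void main()\n"
    "{\n"
    "    ivec2 coord = ivec2( vTexCoord * vec2( textureSize( uDepthMS ) ) );\n"
    "    float d = 0.0;\n"
    "    for ( int i = 0; i < 4; ++i )   // must match the texture's sample count\n"
    "        d += texelFetch( uDepthMS, coord, i ).r;\n"
    "    gl_FragDepth = d * 0.25;\n"
    "}\n";
Averaging is just one choice here; taking the min or max sample would be equally valid depending on what your DOF pass needs.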
HTH, Pete
Can you point me to the page where depth resolving is disallowed? I've already read the page you pointed to a couple of times and there's barely anything on resolving multisampled renderbuffers/multisampled textures. This is probably the only relevant part:
"If the value of SAMPLE_BUFFERS for the read framebuffer is one and the value
of SAMPLE_BUFFERS for the draw framebuffer is zero, the samples corresponding
to each pixel location in the source are converted to a single sample before being
written to the destination. The filter parameter is ignored.. If the source formats are depth values, sample values are resolved in an implementationdependent
manner where the result will be between the minimum and maximum depth values in the pixel."
It sounds to me like "implementation dependent" could also mean not writing to it at all, or writing 0, and when resolving samples the filter is completely ignored anyway, so I'm not sure what your argument was getting at (as a side note, I always use GL_NEAREST and the width and height always match).
Yes, looks like you're right, sorry - I read the single-sampled bit of the spec by mistake (oops). You are correct: glBlitFramebuffer does allow a multisample resolve of depth - it's just implementation-defined what you get (and filter must be set to GL_NEAREST in either case).
It sounds to me like "implementation dependent" could also mean not writing to it at all, or writing 0,
No - the bit after that in the spec is clear about what must happen - "where the result will be between the minimum and maximum depth values in the pixel" - you just have no guarantee that different implementations will do the same thing.
In terms of making this work I'm still a little unsure how GLES handles trying to blit a sized DEPTH16 into an unsized DEPTH texture. What happens if you allocate a DEPTH_COMPONENT16 via glTexStorage2D for the resolve surface, rather than a simple DEPTH via glTexImage?
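I.e. something along these lines for the resolve target and the blit (sketch only; ResolveTex, ResolveFBO and MultisampleFBO are placeholder names):
// Immutable, sized depth texture as the single-sampled resolve target.
glBindTexture( GL_TEXTURE_2D, ResolveTex );
glTexStorage2D( GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT16, Width, Height );
glBindFramebuffer( GL_DRAW_FRAMEBUFFER, ResolveFBO );
glFramebufferTexture2D( GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, ResolveTex, 0 );
glBindFramebuffer( GL_READ_FRAMEBUFFER, MultisampleFBO );
// Depth blits must use GL_NEAREST and the source and destination regions must match 1:1.
glBlitFramebuffer( 0, 0, Width, Height, 0, 0, Width, Height, GL_DEPTH_BUFFER_BIT, GL_NEAREST );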
Sorry for the confusion,
Pete
I just tried that, and it doesn't get resolved. But the problem goes a little deeper because of the separate issue I reported above: creating a sized depth texture using either glTexImage2D or glTexStorage2D, like:
glTexStorage2D( GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT16, Tex->Info.Width, Tex->Info.Height );
results in a working depth attachment (the rendered color buffer looks right, so depth testing works on it), but subsequent usage of it in a shader (for, let's say, DOF) makes it return 0 for every pixel. This is without any antialiasing and resolve. I did try to resolve AA to it, but that doesn't work either.
In the case where you are reading black from the sized DEPTH_COMPONENT16 resolved render buffer, what texture filter are you using? Note that the sized depth formats only support GL_NEAREST filtering (see section 8.16 Texture Completeness in the OpenGL ES 3.1 specification):
Using the preceding definitions, a texture is complete unless any of the following conditions hold true:
<snip>
The effective internal format specified for the texture arrays is a sized internal depth or depth and stencil format (see table 8.14), the value of TEXTURE_COMPARE_MODE is NONE, and either the magnification filter is not NEAREST or the minification filter is neither NEAREST nor NEAREST_MIPMAP_NEAREST.
If you violate any of the conditions in section 8.16 then the texture is classified as incomplete, and rendering is undefined - our driver will default to returning a black value in this case.
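Concretely, the first thing I'd check is that the depth texture's sampler state looks like this right after creation (sketch; DepthTex is a placeholder for your handle):
glBindTexture( GL_TEXTURE_2D, DepthTex );
// Sized depth formats must use NEAREST filtering to stay complete.
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );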
Oh, wow, I had no idea there was such a restriction on sized depth formats in ES. I just tested setting GL_NEAREST as the min/mag filter for all depth textures and now I see the depth effects properly, which means it's fixed. I also tested on my Nexus 4, where I had the same "problem", and now it works there as well. Cool!
Resolving the AA depth still doesn't work though; I tried with and without glTexStorage2D.