So I'm trying to create a multisampled depth texture and I'm seeing some oddities.
First off, this GLES2-style call works:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL );
But then I can't use that for the multisampled texture:
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT, Width, Height, GL_TRUE );
It throws GL_INVALID_ENUM, and reading the docs the only enums that can trigger it are the texture target, which is fine here, and the internal format, so the next suspect is GL_DEPTH_COMPONENT.
Then I tried creating GL_DEPTH_COMPONENT16, 24, and 32F textures:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL );
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL );
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, Width, Height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );
And while they all work, I don't see my depth-buffer-based effects like depth of field, so I can only assume the values are either not being saved (all 0s) or the format is wrong, e.g. I would need an integer sampler. In these scenarios I expect to use a standard sampler2D and get an .r-only texture with values from 0..1. I have the exact same issue with Qualcomm's Adreno 320, but I don't have it with Apple's A7 GPU or my AMD R9 280X.
Am I doing something wrong?
Yes, you are correct; I think my eyes flew somewhere else, as it was 6 AM when I was doing that. Here's some fresh data:
However, I can't use GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, or GL_DEPTH_COMPONENT32F on the resolve texture with glTexImage2D, because then I just get a black texture when using it in a shader, as I mentioned above (in the post from Dec 13, 2014 4:14 PM).
EDIT: I have now tried using GL_DEPTH_COMPONENT for the resolve texture and GL_DEPTH_COMPONENT16 for the multisampled texture, and it seems to (finally) work. Until now I had been trying to avoid using different internal formats for the two.
Glad you have it working. Can you let us know the API calls you are making for the resolve texture using GL_DEPTH_COMPONENT? Just trying to tie up my understanding of this issue (I think it's just the API being messy regarding format/internalformat).
Cheers,
Chris
For the resolve textures it's now
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, Width, Height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
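plus, for the depth resolve texture, the unsized-format call from my first post (repeating it here for completeness):
// Unsized depth format for the single-sampled depth resolve texture
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL );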
and for the multisampled textures it's
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT16, Width, Height, GL_TRUE );
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8, Width, Height, GL_TRUE );
The issue was that I can't use GL_DEPTH_COMPONENT or GL_DEPTH_COMPONENT16 in both places at the same time: glTexImage2D works with just GL_DEPTH_COMPONENT as internalformat, while glTexStorage2DMultisample works only with GL_DEPTH_COMPONENT16/24/32F.
I can also confirm that multisampled textures work in shaders as expected. Through all of this I was trying to manually resolve the depth buffer for follow-up use in post-processing effects, since glBlitFramebuffer doesn't resolve the depth buffer on any GLES3+ implementation I've tried.
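The blit I was attempting for that depth resolve is roughly the following (just a sketch; the FBO handle names are illustrative):
// Multisampled FBO as the read source, single-sampled FBO as the draw target
glBindFramebuffer( GL_READ_FRAMEBUFFER, MSAAFBO );
glBindFramebuffer( GL_DRAW_FRAMEBUFFER, ResolveFBO );
// Depth blits require GL_NEAREST and matching source/destination rectangles
glBlitFramebuffer( 0, 0, Width, Height, 0, 0, Width, Height, GL_DEPTH_BUFFER_BIT, GL_NEAREST );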
Can you point me to the page where depth resolving is disallowed? I've already read the page you pointed to twice, and there's barely anything on resolving multisampled renderbuffers/multisampled textures. This is probably the only relevant part:
"If the value of SAMPLE_BUFFERS for the read framebuffer is one and the value
of SAMPLE_BUFFERS for the draw framebuffer is zero, the samples corresponding
to each pixel location in the source are converted to a single sample before being
written to the destination. The filter parameter is ignored.. If the source formats are depth values, sample values are resolved in an implementationdependent
manner where the result will be between the minimum and maximum depth values in the pixel."
It sounds to me like "implementation dependent" could also mean not writing to it at all, or writing 0. And when resolving samples the filter is completely ignored, so I'm not sure what your argument was getting at (as a side note, I always use GL_NEAREST, and the widths and heights always match).
since glBlitFramebuffer doesn't resolve the depth buffer on any GLES3+ implementation
It's not an implementation issue; the specification explicitly disallows this. The only supported filtering for depth and/or stencil blits is GL_NEAREST (see page 196 in the OpenGL ES 3.0.3 specification).
*EDIT* No it doesn't - sorry - see below!
As you have found, the workaround is to load these as a normal texture and resolve using a sampler filter.
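Something along these lines would do it (just a sketch; the handle and program names are hypothetical, and the fragment shader would texelFetch the sampler2DMS and write gl_FragDepth):
// Target FBO whose depth attachment is the single-sampled depth texture
glBindFramebuffer( GL_FRAMEBUFFER, ResolveFBO );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, ResolveDepthTex, 0 );
// Let every fragment through so its resolved depth gets written
glEnable( GL_DEPTH_TEST );
glDepthFunc( GL_ALWAYS );
glDepthMask( GL_TRUE );
// Full-screen pass that reads the multisampled depth texture and writes gl_FragDepth
glUseProgram( DepthResolveProgram );
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_2D_MULTISAMPLE, MSAADepthTex );
glDrawArrays( GL_TRIANGLES, 0, 3 );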
HTH, Pete
Yes, it looks like you're right, sorry - I read the single-sampled part of the spec by mistake (oops). You are correct, glBlitFramebuffer does allow a multisample resolve of depth - it's just implementation-defined what you get (and the filter must be set to GL_NEAREST in either case).
It sounds to me like "implementation dependent" could also mean not writing to it at all, or writing 0,
No - the bit after that in the spec is clear about what must happen - "where the result will be between the minimum and maximum depth values in the pixel" - you just have no guarantee that different implementations will do the same thing.
In terms of making this work I'm still a little unsure how GLES handles trying to blit a sized DEPTH16 into an unsized DEPTH texture. What happens if you allocate a DEPTH_COMPONENT16 via glTexStorage2D for the resolve surface, rather than a simple DEPTH via glTexImage?
Sorry for the confusion,
Pete
I just tried that, and it doesn't get resolved. But the issue goes a little deeper, because of the separate issue I reported above: creating a sized depth texture using either glTexImage2D or glTexStorage2D, like
glTexStorage2D( GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT16, Tex->Info.Width, Tex->Info.Height );
results in an adequate color buffer (so depth testing against it works), but subsequent usage of the depth texture in a shader (for, say, DOF) returns 0 for every pixel. This is without any antialiasing or resolving involved. I did try to resolve AA into it as well, but that doesn't work either.
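For reference, that texture is attached as the FBO's depth attachment in the usual way (a sketch; the handle name is just illustrative):
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, DepthTex, 0 );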
In the case where you are reading black from the sized DEPTH_COMPONENT16 resolved render buffer, what texture filter are you using? Note that the sized depth formats only support GL_NEAREST filtering (see section 8.16 Texture Completeness in the OpenGL ES 3.1 specification):
Using the preceding definitions, a texture is complete unless any of the following conditions hold true:
<snip>
The effective internal format specified for the texture arrays is a sized internal depth or depth and stencil format (see table 8.14), the value of TEXTURE_COMPARE_MODE is NONE, and either the magnification filter is not NEAREST or the minification filter is neither NEAREST nor NEAREST_MIPMAP_NEAREST.
If you violate any of the conditions in section 8.16 then the texture is classified as incomplete, and rendering is undefined - our driver will default to returning a black value in this case.
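In practice that means setting, for example (a sketch; DepthTex stands for your depth texture handle):
// Sized depth formats are only texture-complete with NEAREST filtering
glBindTexture( GL_TEXTURE_2D, DepthTex );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );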
Oh, wow, I had no idea there was such a restriction on sized depth formats in ES. I just tested setting GL_NEAREST as the min/mag filter for all depth textures, and now I see the depth effects properly, which means it's fixed. I also tested on my Nexus 4, where I had the same "problem", and it now works there as well. Cool!
Resolving AA depth still doesn't work, though; I tried with and without glTexStorage2D.
Hi,
You stated that this works:
Specifically saying:
glTexImage2D works with just GL_DEPTH_COMPONENT as internalformat
This doesn't seem correct to me, as it appears to be out of spec. It shouldn't work, and if it does, this could be a bug.
The spec states that for glTexImage2D you have to specify GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, or GL_DEPTH_COMPONENT32F for the internalformat, NOT GL_DEPTH_COMPONENT.
https://www.khronos.org/opengles/sdk/docs/man3/html/glTexImage2D.xhtml
Regards,
Michael McGeagh
Yes, but since GLES2 uses the same glTexImage2D, and GLES2 only has GL_DEPTH_COMPONENT, it should work, because GLES3 is backwards compatible. Is it not?
Very good point. The man pages should probably state that a bit more clearly.
Is it possible for you to provide us with a minimal reproducer for this issue? There isn't anything obviously wrong with your API usage at this point, so it would help if we could do some analysis on the API trace.
Thanks,
Well, I just have my benchmark app (Relative Benchmark) with which I could reproduce it, but it's fairly complex and has a lot of draw calls and passes. I still have some visual errors when toggling features on and off, though: for example, if you turn Motion Blur to high and then off, most objects disappear, although the log doesn't contain any errors (I log most things to the Android log, including KHR_debug callbacks, and I call glGetError after every API call).