So I'm trying to create a multisampled depth texture and I'm seeing some oddities.
First off, this GLES2-like call works:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL );
But then I can't do the equivalent for the multisampled texture:
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT, Width, Height, GL_TRUE );
It throws GL_INVALID_ENUM; reading the docs, that error is documented as coming from the texture target, which is fine here, so the next suspect is GL_DEPTH_COMPONENT.
Then I tried creating GL_DEPTH_COMPONENT16, 24 and 32F textures:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL );
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL );
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, Width, Height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );
And while they all work, I don't see my depth-buffer-based effects like depth of field, so I can only assume the values are either not being saved (all zeros) or the format is wrong (e.g. I would need an integer sampler). In these scenarios I expect to use a standard sampler2D and get an .r-only texture with values in 0..1. I have the exact same issue with Qualcomm's Adreno 320, but I don't have it with Apple's A7 GPU or my AMD R9 280X.
Am I doing something wrong?
Did anyone go in depth with this? I now get even more odd results (I did change some shaders and the draw call contents may be a bit different now) when I'm just using multisampled textures for antialiasing. What I basically do is the same thing: I create color and depth multisampled textures with 4 samples like:
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT, Width, Height, GL_TRUE );
and I get:
My debug message : Calling glTexStorage2DMultisample TextureTarget=37120 GLInternalFormat=6402 ( 6402 is GL_RGBA, I tried that and also GL_RGBA8, same result )
KHR_debug callback : Source=OpenGL Type=Error Severity=high ID=33350 Message=Error:glTexStorage2DMultisample::<internalformat> is not an accepted value
01-19 04:18:43.778: I/com.re3.benchmark(6877): GLDebugCallback Source=OpenGL Type=Error Severity=high ID=33350 Message=Error:glClear::currently bound framebuffer is not valid for this operation
So basically I can't clear a multisampled texture? Odd.
My app will be updated soon here (RE3 Benchmark) with code that uses multisampled textures only on GL ES 3.1 (and renderbuffers otherwise). There are some other draw errors when using motion blur that could be investigated.
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT, Width, Height, GL_TRUE );
and I get :
My debug message : Calling glTexStorage2DMultisample TextureTarget=37120 GLInternalFormat=6402 ( 6402 is GL_RGBA, I tried that and also GL_RGBA8, same result )
KHR_debug callback : Source=OpenGL Type=Error Severity=high ID=33350 Message=Error:glTexStorage2DMultisample::<internalformat> is not an accepted value
In the first line you pass GL_DEPTH_COMPONENT as the internal format, which isn't a valid internal format so I understand why that would fail. From a glance at https://www.khronos.org/opengles/sdk/docs/man31/html/glTexStorage2DMultisample.xhtml, it needs to be GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, or GL_DEPTH_COMPONENT32F. Also 6402 (0x1902) is GL_DEPTH_COMPONENT, not GL_RGBA. Try one of the valid sized internal formats and it should work.
Hth,
Chris
Is my app too complex an example? You just need to run it and tap the Antialiasing button, and the bug will manifest itself (and spam the log).
As a side note, I tried making multisampled textures with GL_RGBA16UI and that works fine; GL_RGBA/GL_RGBA8 doesn't.
Did anyone go in depth with this ?
It's very hard to go in depth when you don't provide any examples of what is actually wrong. Debugging anything from a pile of text partially explaining a problem is impossible.
So basically I can't clear a multisampled texture ?
*EDITED* Based on your post, the error for the clear happens after the error for texture storage, so I would guess your framebuffer is failing a completeness check, which makes it invalid for any rendering.
There's some other draw errors when using motion blur that could be investigated.
We're happy to help debug issues which look like Mali bugs, and we're generally happy to help on more generic graphics issues if you can provide a complete and specific example of what you have tried and where it is going wrong. We can't really help on generic application debug, sorry.
Cheers,
Pete
Based on the replies above and below we're trying to help you with the multi-sampling issue (as that is a specific problem). I was referring to the "other draw errors when using motion blur that could be investigated" - there are no details in your post about how the blur is supposed to work, what it is supposed to look like, what it is actually rendering, etc.
Most of us donate our spare time to answer questions in the forums, so we really don't want to spend an hour reverse engineering a specific behavior out of a whole application, at which point we'd be guessing what the problem is, only to find out later we guessed wrong. Please be specific about any issues you raise - it makes it much more likely we're able to help.
Kind regards, Pete
Yes, you are correct, I think my eyes flew somewhere as it was 6 AM when I was doing that. Here's some fresh data :
However, I can't use GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, or GL_DEPTH_COMPONENT32F on the resolve texture with glTexImage2D, because then I get just a black texture when using it in a shader, as I mentioned above (in the post from Dec 13, 2014 4:14 PM).
EDIT: I have now tried using GL_DEPTH_COMPONENT for the resolve texture and GL_DEPTH_COMPONENT16 for the multisampled texture, and it (finally) seems to work. Until now I had been avoiding having different internal formats for the two.
Glad you have it working. Can you let us know the API calls you are making for the resolve texture using GL_DEPTH_COMPONENT? Just trying to tie up my understanding of this issue (I think it's just the API being messy regarding format/internalformat).
For the resolve textures it's now
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, Width, Height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
and for the multisampled textures it's
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT16, Width, Height, GL_TRUE );
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8, Width, Height, GL_TRUE );
The issue was that I can't use GL_DEPTH_COMPONENT or GL_DEPTH_COMPONENT16 in both places at the same time: glTexImage2D works with just GL_DEPTH_COMPONENT as internalformat, while glTexStorage2DMultisample works only with GL_DEPTH_COMPONENT16/24/32F.
I can also confirm that multisampled textures work in shaders as expected; I was trying, through all this, to manually resolve the depth buffer for follow-up use in post-processing effects, since glBlitFramebuffer doesn't resolve the depth buffer on any GLES3+ implementation I've tried.
Can you point me to the page where depth resolving is disallowed? I've already read the page you pointed to twice, and there's barely anything on resolving multisampled renderbuffers/multisampled textures. This is probably the only relevant part:
"If the value of SAMPLE_BUFFERS for the read framebuffer is one and the value of SAMPLE_BUFFERS for the draw framebuffer is zero, the samples corresponding to each pixel location in the source are converted to a single sample before being written to the destination. The filter parameter is ignored. If the source formats are depth values, sample values are resolved in an implementation-dependent manner where the result will be between the minimum and maximum depth values in the pixel."
It sounds to me like "implementation dependent" could also mean not writing to it at all, or writing 0, and when resolving samples the filter is completely ignored, so I'm not sure what your argument was (as a side note, I always use GL_NEAREST, and the width and height always match).
since glBlitFramebuffer doesn't resolve the depth buffer on any GLES3+ implementation
It's not an implementation issue, the specification explicitly disallows this. The only supported filtering for depth and/or stencil blits is GL_NEAREST (see page 196 in the OpenGL ES 3.0.3 specification).
*EDIT* No it doesn't - sorry - see below!
As you have found, the workaround is to load these as a normal texture and resolve using a sampler filter.
HTH, Pete
Yes, looks like you're right sorry - I read the single sampled bit of the spec by mistake (oops). You are correct, glBlitFramebuffer does allow a multi-sample resolve of depth - it's just implementation defined what you get (and filter must be set to GL_NEAREST in either case).
It sounds to me like "implementation dependent" can also be like not writing to it, or writing 0,
No - the bit after that in the spec is clear about what must happen - "where the result will be between the minimum and maximum depth values in the pixel" - you just have no guarantee that different implementations will do the same thing.
In terms of making this work I'm still a little unsure how GLES handles trying to blit a sized DEPTH16 into an unsized DEPTH texture. What happens if you allocate a DEPTH_COMPONENT16 via glTexStorage2D for the resolve surface, rather than a simple DEPTH via glTexImage?
Sorry for the confusion,
I just tried that; it doesn't get resolved. But the issue is a little deeper, due to the separate issue I reported above, which is that creating a sized depth texture using either glTexImage2D or glTexStorage2D like:
glTexStorage2D( GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT16, Tex->Info.Width, Tex->Info.Height );
results in a correct color buffer (so depth testing works with it), but subsequent use of the texture in a shader (for, say, DOF) returns 0 for every pixel. This is without any antialiasing or resolve. I did try to resolve AA into it, but it doesn't work.
In the case where you are reading black from the sized DEPTH_COMPONENT16 resolve texture, what texture filter are you using? Note that the sized depth formats only support GL_NEAREST filtering (see section 8.16, Texture Completeness, in the OpenGL ES 3.1 specification):
Using the preceding definitions, a texture is complete unless any of the following conditions hold true:
<snip>
The effective internal format specified for the texture arrays is a sized internal depth or depth and stencil format (see table 8.14), the value of TEXTURE_COMPARE_MODE is NONE, and either the magnification filter is not NEAREST or the minification filter is neither NEAREST nor NEAREST_MIPMAP_NEAREST.
If you violate any of the conditions in section 8.16 then the texture is classified as incomplete, and rendering is undefined - our driver will default to returning a black value in this case.
Oh wow, I had no idea there was such a restriction on sized depth formats in ES. I just tested setting GL_NEAREST as the min/mag filter for all depth textures, and now I see the depth effects properly, which means it's fixed. I also tested on my Nexus 4, where I had the same "problem", and now it works there as well, cool!
Resolving AA depth, though, still doesn't work; I tried with and without glTexStorage2D.
Hi,
You stated that this works:
Specifically saying:
glTexImage2D works with just GL_DEPTH_COMPONENT as intenalformat
This doesn't seem correct to me, as it appears to be out of spec. It shouldn't work, and if it does, this could be a bug.
The spec states for glTexImage2D, you have to specify GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, or GL_DEPTH_COMPONENT32F for the internalformat, NOT GL_DEPTH_COMPONENT.
https://www.khronos.org/opengles/sdk/docs/man3/html/glTexImage2D.xhtml
Regards,
Michael McGeagh
Yes, but since GLES2 uses the same glTexImage2D, and GLES2 only has GL_DEPTH_COMPONENT, shouldn't it work, given that GLES3 is backwards compatible?