So I'm trying to create a multisampled depth texture and I'm seeing some oddities.
First off, this GLES2-like call works:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL );
But then I can't use it for the multisampled texture:
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT, Width, Height, GL_TRUE );
It throws GL_INVALID_ENUM, and according to the docs that error can only come from the texture target, which is fine here, so the next suspect is GL_DEPTH_COMPONENT.
Then I tried creating GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, and GL_DEPTH_COMPONENT32F textures:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL );
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL );
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, Width, Height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );
And while they all succeed, my depth-buffer-based effects like depth of field don't show up, so I can only assume the values are either not being saved (all 0s) or the format is wrong, e.g. I would actually need an integer sampler. What I expect in these scenarios is to use a standard sampler2D and get a texture with values from 0..1 in .r only. I have the exact same issue with Qualcomm's Adreno 320, but not with Apple's A7 GPU or my AMD R9 280X.
Am I doing something wrong?
I don't have time to look at this in any detail now, but in terms of getting a better understanding of the error messages, note that our recent driver releases support the KHR_debug extension, which generally provides much more "human readable" error messages than the raw return codes.
HTH, Pete
Having not used this before, I don't have any kind of framework set up right now to test what I'm about to tell you, but according to the official spec the internalformat argument of glTexStorage2DMultisample must be a sized internal format. As such you've already made the correct logical leap to using GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, or GL_DEPTH_COMPONENT32F in your glTexStorage2DMultisample call.
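If it helps, the allocation would then look something like this (a minimal, untested sketch on my part, reusing your Width/Height and 4-sample setup; depthTexMS is an illustrative name):
GLuint depthTexMS;
glGenTextures( 1, &depthTexMS );
glBindTexture( GL_TEXTURE_2D_MULTISAMPLE, depthTexMS );
/* internalformat must be sized; GL_DEPTH_COMPONENT24 is one valid choice */
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT24, Width, Height, GL_TRUE );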
It sounds like you already worked that part out, but that's the only advice I can give based on this tiny snippet. To say why nothing is showing up, I'd have to see more of the code around what you're doing with the buffers and what you'd expect to see.
-Stacy
I've implemented the GL_KHR_debug extension as follows:
PFNGLDEBUGMESSAGECALLBACKPROC glDebugMessageCallback = (PFNGLDEBUGMESSAGECALLBACKPROC)eglGetProcAddress("glDebugMessageCallback");
PFNGLDEBUGMESSAGECONTROLPROC glDebugMessageControl = (PFNGLDEBUGMESSAGECONTROLPROC)eglGetProcAddress("glDebugMessageControl");
if ( glDebugMessageCallback != NULL && glDebugMessageControl != NULL )
{
    glEnable( GL_DEBUG_OUTPUT );
    glDebugMessageCallback( &GLDebugCallback, NULL );
    /* enable every source/type/severity so nothing gets filtered out */
    glDebugMessageControl( GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, NULL, GL_TRUE );
    /* synchronous output so the callback fires on the offending call's thread */
    glEnable( GL_DEBUG_OUTPUT_SYNCHRONOUS );
}
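For reference, GLDebugCallback has the standard KHR_debug signature; a rough sketch (the logcat tag and message layout here are illustrative rather than my exact code):
#include <android/log.h>

static void GL_APIENTRY GLDebugCallback( GLenum source, GLenum type, GLuint id, GLenum severity, GLsizei length, const GLchar *message, const void *userParam )
{
    (void)source; (void)length; (void)userParam;
    /* forward everything to logcat */
    __android_log_print( ANDROID_LOG_INFO, "com.re3.benchmark", "GLDebugCallback Type=0x%X Severity=0x%X ID=%u Message=%s", type, severity, id, message );
}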
I'm now being told:
12-13 14:37:40.628: I/com.re3.benchmark(20517): GLDebugCallback Source=OpenGL Type=Error Severity=high ID=33350 Message=Error:glTexStorage2DMultisample::<internalformat> is not an accepted value
Just as I already figured. Interestingly enough, my other issue is now gone: it no longer throws GL_OUT_OF_MEMORY, but some objects are still rendered incorrectly. I'm thinking of dumping the depth buffer to a file to see what values it has, since my effects don't look right. I was wondering whether the depth buffer is perhaps stored as -1..1 for DEPTH24/32 instead of the 0..1 I'm expecting, or maybe the 32-bit value is spread across the RGBA8 components, so I'd have to fetch pixel.rgba and reconstruct a float from the 4 sub-values?
EDIT: It seems glReadPixels doesn't work for GL_DEPTH_COMPONENT, at least on GLES 3.1, so I can't read back the depth buffer to actually look at the values. I'm left with visual debugging and trying to figure out what kind of values it has...
EDIT2: I just made a debug shader and can confirm that the r, g, b components are 0 while a = 1 (for a DEPTH32F texture with GL_FLOAT). I also tried pixel = pow(pixel, 5.0) to see if there's anything besides white in alpha; doesn't look like there is.
So, after these edits, where are you at now?
Have you successfully bound the buffer but can't write into it?
Or is it drawing to it without error but simply showing nothing in the associated texture?
Just trying to figure out what the next step is.
I can bind the depth buffer to be used in a framebuffer, and the color buffer looks OK, i.e. depth testing/writing is performed, but accessing the depth buffer as a texture in a fragment shader returns black.
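For reference, the setup is roughly this (simplified sketch, variable names are illustrative):
/* depth textures have no mip chain here, so the min filter must be a
   non-mipmapped mode or the texture is incomplete and samples as black */
glBindTexture( GL_TEXTURE_2D, depthTex );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
/* plain sampling, not shadow comparison */
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE );

glBindFramebuffer( GL_FRAMEBUFFER, fbo );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0 );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0 );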
Did anyone go in depth with this? I now get even more odd results (I did change some shaders and the draw call contents may be a bit different now) when I'm just using multisampled textures for antialiasing. What I basically do is the same thing: I create color and depth multisampled textures with 4 samples like:
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT, Width, Height, GL_TRUE );
and I get:
My debug message: Calling glTexStorage2DMultisample TextureTarget=37120 GLInternalFormat=6402 (6402 is GL_RGBA, I tried that and also GL_RGBA8, same result)
KHR_debug callback: Source=OpenGL Type=Error Severity=high ID=33350 Message=Error:glTexStorage2DMultisample::<internalformat> is not an accepted value
01-19 04:18:43.778: I/com.re3.benchmark(6877): GLDebugCallback Source=OpenGL Type=Error Severity=high ID=33350 Message=Error:glClear::currently bound framebuffer is not valid for this operation
So basically I can't clear a multisampled texture? Odd.
My app will be updated here soon (RE3 Benchmark) with code that uses multisampled textures on GL ES 3.1 (and renderbuffers otherwise). There are some other draw errors when using motion blur that could be investigated.
> EDIT: It seems glReadPixels doesn't work for GL_DEPTH_COMPONENT, at least on GLES 3.1, so I can't read back the depth buffer to actually look at the values. I'm left with visual debugging and trying to figure out what kind of values it has...
Correct - glReadPixels only works for color buffers. You can always write a shader which dumps the depth value out to an RGBA target.
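Something along these lines should do it (an untested sketch; uDepth and vUV are placeholder names):
static const char *dumpDepthFS =
    "#version 300 es\n"
    "precision highp float;\n"
    "uniform highp sampler2D uDepth;\n"
    "in vec2 vUV;\n"
    "out vec4 oColor;\n"
    "void main()\n"
    "{\n"
    "    float d = texture( uDepth, vUV ).r; // depth lives in .r only\n"
    "    oColor = vec4( d, d, d, 1.0 );      // now readable via glReadPixels\n"
    "}\n";
Render a fullscreen quad with that into an RGBA8 target and glReadPixels the result.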
> EDIT2: I just made a debug shader and can confirm that the r, g, b components are 0 while a = 1 (for a DEPTH32F texture with GL_FLOAT). I also tried pixel = pow(pixel, 5.0) to see if there's anything besides white in alpha; doesn't look like there is.
Only the r component should contain anything in a depth texture, as it is a single-channel format.
> perhaps the depth buffer is stored as -1..1 for DEPTH24/32 instead of the 0..1 I'm expecting, or maybe the 32-bit value is spread across the RGBA8 components, so I'd have to fetch pixel.rgba and reconstruct a float from the 4 sub-values?
Depth stores values between 0 and 1, but it isn't a linear spread of bits; it's stored on a logarithmic-type scale, so most values will be close to 1. Also remember that depth is generally a 24-bit format (for a normal D24/S8 buffer), so you'll need to load it into a highp variable if you want to avoid losing precision.
There are plenty of examples on the web explaining how to linearize a depth value if you want a linear representation.
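For a standard perspective projection it boils down to something like this (a sketch; assumes the default glDepthRangef( 0, 1 ), with zNear/zFar being your camera's clip planes - the same math works in GLSL):
/* convert a sampled depth d in 0..1 back to a linear eye-space distance */
float LinearizeDepth( float d, float zNear, float zFar )
{
    float ndc = d * 2.0f - 1.0f; /* window space 0..1 -> NDC -1..1 */
    return ( 2.0f * zNear * zFar ) / ( zFar + zNear - ndc * ( zFar - zNear ) );
}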
> glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT, Width, Height, GL_TRUE );
> and I get:
> My debug message: Calling glTexStorage2DMultisample TextureTarget=37120 GLInternalFormat=6402 (6402 is GL_RGBA, I tried that and also GL_RGBA8, same result)
> KHR_debug callback: Source=OpenGL Type=Error Severity=high ID=33350 Message=Error:glTexStorage2DMultisample::<internalformat> is not an accepted value
In the first line you pass GL_DEPTH_COMPONENT as the internal format, which isn't a valid sized internal format, so I understand why that would fail. From a glance at https://www.khronos.org/opengles/sdk/docs/man31/html/glTexStorage2DMultisample.xhtml, it needs to be GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, or GL_DEPTH_COMPONENT32F. Also, 6402 (0x1902) is GL_DEPTH_COMPONENT, not GL_RGBA. Try one of the valid sized internal formats and it should work.
Hth,
Chris
Is my app too complex an example? You just need to run it and tap the Antialiasing button, and the bug will manifest itself (and spam the log).
As a side note, I tried making multisampled textures with GL_RGBA16UI and that works fine; GL_RGBA/GL_RGBA8 doesn't.
> Did anyone go in depth with this?
It's very hard to go in depth when you don't provide any examples of what is actually wrong. Debugging anything from a pile of text partially explaining a problem is impossible.
> So basically I can't clear a multisampled texture?
*EDITED* Based on your post, the error for the clear happens after the error for texture storage, so I would guess your framebuffer is failing a completeness check, which makes it invalid for any rendering.
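You can confirm that cheaply right after attaching everything, before the first glClear or draw:
GLenum status = glCheckFramebufferStatus( GL_FRAMEBUFFER );
if ( status != GL_FRAMEBUFFER_COMPLETE )
{
    /* log the value however you prefer; for example
       GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE (0x8D56) would point at a
       sample-count mismatch between the attachments */
}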
> There are some other draw errors when using motion blur that could be investigated.
We're happy to help debug issues which look like Mali bugs, and we're generally happy to help on more generic graphics issues if you can provide a complete and specific example of what you have tried and where it is going wrong. We can't really help on generic application debug, sorry.
Cheers,
Pete
Based on the replies above and below, we're trying to help you with the multisampling issue (as that is a specific problem). I was referring to the "other draw errors when using motion blur that could be investigated" - there are no details in your post about how the blur is supposed to work, what it is supposed to look like, what it is actually rendering, etc.
Most of us donate our spare time to answer questions on the forums, so we really don't want to spend an hour reverse-engineering a specific behavior out of a whole application, at which point we'd be guessing at the problem, only to find out later we guessed wrong. Please be specific about any issues you raise - it makes it much more likely we're able to help.
Kind regards, Pete
Yes, you are correct; I think my eyes flew somewhere, as it was 6 AM when I was doing that. Here's some fresh data:
However, I can't use GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, or GL_DEPTH_COMPONENT32F on the resolve texture with glTexImage2D, because then I just get a black texture when using it in a shader, as I mentioned above (in the post from Dec 13, 2014 4:14 PM).
EDIT: I have now tried using GL_DEPTH_COMPONENT for the resolve texture and GL_DEPTH_COMPONENT16 for the multisampled texture, and it seems to (finally) work. Until now I had been avoiding having different internal formats for the two.
Glad you have it working. Can you let us know the API calls you are making for the resolve texture using GL_DEPTH_COMPONENT? Just trying to tie up my understanding of this issue (I think it's just the API being messy regarding format/internalformat).
For the resolve textures it's now:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, Width, Height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
and for the multisampled textures it's:
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT16, Width, Height, GL_TRUE );
glTexStorage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8, Width, Height, GL_TRUE );
The issue was that I can't use GL_DEPTH_COMPONENT or GL_DEPTH_COMPONENT16 in both places at the same time: glTexImage2D works with just GL_DEPTH_COMPONENT as internalformat, while glTexStorage2DMultisample works only with GL_DEPTH_COMPONENT16/24/32F.
I can also confirm that multisampled textures work in shaders as expected; through all this I was trying to manually resolve the depth buffer for follow-up use in post-processing effects, since glBlitFramebuffer doesn't resolve the depth buffer on any GLES3+ implementation I've tried.
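For reference, the manual resolve boils down to a fullscreen pass like this (simplified sketch; uDepthMS is a placeholder name, and taking the min just keeps the nearest of the 4 samples):
static const char *resolveDepthFS =
    "#version 310 es\n"
    "precision highp float;\n"
    "uniform highp sampler2DMS uDepthMS;\n"
    "void main()\n"
    "{\n"
    "    ivec2 p = ivec2( gl_FragCoord.xy );\n"
    "    float d = 1.0;\n"
    "    for ( int i = 0; i < 4; ++i )\n"
    "        d = min( d, texelFetch( uDepthMS, p, i ).r );\n"
    "    gl_FragDepth = d;\n"
    "}\n";
Drawn with depth writes enabled and glDepthFunc( GL_ALWAYS ) into the single-sampled FBO, so every resolved value actually lands in its depth attachment.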