I'm now trying to render to an integer format as color attachment 1:
GLFormat = GL_RG_INTEGER;
GLInternalFormat = GL_RG32UI;
Type = GL_UNSIGNED_INT;
glTexImage2D( GL_TEXTURE_2D, 0, GLInternalFormat, Tex->Info.Width, Tex->Info.Height, 0, GLFormat, Type, NULL );
I have GL_NEAREST filtering on this texture.
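For context, a minimal sketch of how such a texture could end up as attachment 1 (the Tex->Handle field and the already-bound FBO are my assumptions, not code from the original post):

// Hypothetical sketch: attach the GL_RG32UI texture as color attachment 1
// of the currently bound framebuffer and enable both MRT outputs.
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, Tex->Handle, 0 );
const GLenum DrawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers( 2, DrawBuffers );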
And in the object's shader I have this:
layout(location = 1) out uvec2 VelocityOut;
//...
VelocityOut = uvec2( vec2( VelocityVertex.xy * vec2(0.5) + vec2(0.5) ) * vec2( 4294967295.0 ) );
VelocityVertex is in -1..1 and I'm trying to pack it into 0..4294967295 (MAX_UINT).
The problem is that I don't get the values back in -1..1 in the following post-processing shader, where I have:
uniform highp usampler2D Texture1; //...
vec4 curFramePixelVelocity = vec4( texelFetch( Texture1, ivec2( OutTexcoord * Texture0Size ), 0 ) ) / vec4( 4294967295.0 );
curFramePixelVelocity = curFramePixelVelocity * vec4(2.0) - vec4(1.0);
The exact same shaders give correct results on my desktop AMD card. Using a GL_RGBA8 texture attachment, where I just convert from -1..1 to 0..1, works as expected, but I'm trying to increase the precision by using an integer format.
UPDATE: I think my problem is related to needing to convert from UINT32 to float inside the shader. I just tried using 65535 as the multiplier and with that it works fine. I'm curious what the maximum value I can use in a shader is - since highp floats have 23 bits of mantissa, is 2^23 safe to use?
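For anyone following along, here is a minimal sketch of the packing with a multiplier that stays inside highp float's exactly-representable integer range; kScale and the unpack line are my own illustration, not code from the post:

// Sketch, assuming highp float: a 23-bit mantissa (plus the implicit leading
// bit) represents every integer up to 2^24 exactly, so 2^24 - 1 is a safe scale.
const float kScale = 16777215.0; // 2^24 - 1, hypothetical constant name
// pack -1..1 into 0..kScale:
VelocityOut = uvec2( ( VelocityVertex.xy * 0.5 + 0.5 ) * kScale );
// unpack in the post-processing shader:
vec2 Velocity = vec2( texelFetch( Texture1, ivec2( OutTexcoord * Texture0Size ), 0 ).xy ) / kScale * 2.0 - 1.0;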
UPDATE 2: Trying to create a GL_RG32UI texture with Samples = 4 results in:
//glTexStorage2DMultisample TextureTarget=37120 Samples=4 GLInternalFormat=33340
GLDebugCallback Source=OpenGL Type=Error Severity=high ID=33350 Message=Error:glTexStorage2DMultisample::invalid number of samples
Calling glGetError after glTexStorage2DMultisample returns GL_INVALID_OPERATION.
I was looking at the glTexStorage2DMultisample page in the OpenGL ES 3.1 Reference Pages and there is nothing there that says integer textures should not support Samples > 1.
Can you share a whole shader? It's hard to understand what you're trying to do based on those fragments.
If I had to guess I would think your precision is too low - have you tried setting the precision for the integer types to highp? Most people forget to change the integer precision - in fragment shaders the default for int is mediump, which is only guaranteed 16 bits (there is no default for float, so float precision must be specified explicitly in the shader).
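Something along these lines at the top of the fragment shader would do it (a minimal sketch):

precision highp float; // float has no default in fragment shaders
precision highp int;   // overrides the mediump (16 bit minimum) default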
//glTexStorage2DMultisample TextureTarget=37120 Samples=4 GLInternalFormat=33340
GLDebugCallback Source=OpenGL Type=Error Severity=high ID=33350 Message=Error:glTexStorage2DMultisample::invalid number of samples
Calling glGetError after glTexStorage2DMultisample returns GL_INVALID_OPERATION.
The error message section in the spec for glTexStorage2DMultisample is pretty clear on why this happens:
An INVALID_OPERATION error is generated if samples is greater than the maximum number of samples supported for this target and internalformat. The maximum number of samples supported can be determined by calling GetInternalformativ with a pname of SAMPLES (see section 19.3 of the GLES 3.1 specification).
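A minimal sketch of that query, using the target and format from the failing call above:

GLint NumSampleCounts = 0;
glGetInternalformativ( GL_TEXTURE_2D_MULTISAMPLE, GL_RG32UI, GL_NUM_SAMPLE_COUNTS, 1, &NumSampleCounts );
GLint Samples[16] = { 0 }; // sample counts come back in descending order
if ( NumSampleCounts > 16 ) NumSampleCounts = 16;
glGetInternalformativ( GL_TEXTURE_2D_MULTISAMPLE, GL_RG32UI, GL_SAMPLES, NumSampleCounts, Samples );
// Samples[0] is now the highest supported sample count for this target + format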
In general OpenGL ES formats are not required to support multi-sampling at all - and for integer formats it is not easy to define what it should even mean. Integer outputs tend to be data rather than color, so a downsampled average would generally be a pretty meaningless value.
Cheers, Pete
Yeah, I do realize integer multisampling is rather odd. I'm only doing it so I can have multisampled color AND a separate attachment with integer output (for high quality motion blur), since MRT requires all attachments to have the same sample count. I'm basically using integer formats because of the lack of RGB16F formats for render targets. Thanks for letting me know GetInternalformativ exists in GLES 3.0+; I didn't know the function was implemented there. I thought the standard did guarantee a minimum number of samples for a specific list of formats though, just like in DX - I suppose that's not the case?
As for int precision, I'll try it, but first I'm trying to convert MAX_UINT to a float value, and since floats have at most a 23-bit mantissa on my Nexus 10, I think the conversion overflows and goes into negative values or something similar.
I thought the standard did guarantee a minimum number of samples for a specific list of formats though, just like in DX - I suppose that's not the case?
Not that I've managed to find in the spec yet.
A small addendum for anyone interested in the matter: I scanned the GLES 3.1 spec and discovered these:
GL_MAX_COLOR_TEXTURE_SAMPLES - minimum value 1 (Nexus 10: 4)
GL_MAX_DEPTH_TEXTURE_SAMPLES - minimum value 1 (Nexus 10: 4)
GL_MAX_INTEGER_SAMPLES - minimum value 1 (Nexus 10: 1)
GL_MAX_FRAMEBUFFER_SAMPLES - minimum value 4
I think an integer texture first needs to satisfy the GL_MAX_INTEGER_SAMPLES limit, and only after that GL_MAX_FRAMEBUFFER_SAMPLES. With GL_MAX_INTEGER_SAMPLES = 1 on the Nexus 10, that would explain why Samples = 4 fails for GL_RG32UI.
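A minimal sketch of querying those limits at runtime (the variable names are mine):

GLint MaxColorTextureSamples = 0, MaxDepthTextureSamples = 0;
GLint MaxIntegerSamples = 0, MaxFramebufferSamples = 0;
glGetIntegerv( GL_MAX_COLOR_TEXTURE_SAMPLES, &MaxColorTextureSamples );
glGetIntegerv( GL_MAX_DEPTH_TEXTURE_SAMPLES, &MaxDepthTextureSamples );
glGetIntegerv( GL_MAX_INTEGER_SAMPLES, &MaxIntegerSamples ); // 1 on the Nexus 10
glGetIntegerv( GL_MAX_FRAMEBUFFER_SAMPLES, &MaxFramebufferSamples );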