Hi there!
I'm kinda new to OpenGL land, so this might just be a stupid mistake. First of all, I need my shaders to cross-compile with HLSL, so there is some macro-foo in my shaders, but I don't think that is what's getting me into trouble here.
We couldn't get the shaders to run properly on AMD hardware (vertices messed up), but only when I accessed some varyings for texture coordinates and normals. Now I've run into a similar problem on my Nvidia GTX570, which seems really strange.
The following happens:
I have this vertex shader here (I've stripped some #defines and stuff, but you should get the idea):
// [...]
attribute float3 in_Position;
attribute float3 in_Normal;
attribute float2 in_TexCoord;
attribute float2 in_LightmapCoord;
uniform float4x4 Model;
uniform float4x4 View;
uniform float4x4 Projection;
uniform float4x4 ModelViewProj;
uniform float2 LightmapPos;
uniform float2 LightmapPatchSize;
varying float3 v_WorldPos;
varying float3 v_Normal;
varying float2 v_TexCoord;
varying float2 v_LightmapCoord;
void main()
{
    v_WorldPos = mul(float4(in_Position.xyz, 1.0f), Model);
    gl_Position = mul(float4(in_Position.xyz, 1.0f), ModelViewProj);
    //v_Normal = in_Normal;
    v_Normal = float3(0.0f, 0.0f, 0.1f);
    v_TexCoord = in_TexCoord;
    v_LightmapCoord = in_LightmapCoord * LightmapPatchSize + LightmapPos;
}
This shader messes up the texture coordinates (both v_TexCoord and v_LightmapCoord) in the fragment shader, but only when I assign float3(0.0f, 0.0f, 0.1f) to v_Normal. When I use the commented-out line above instead, they work just fine.
So it comes down to this:
v_Normal = in_Normal; // Works fine
v_Normal = float3(0.0f, 0.0f, 0.1f); // Messes up the texture coordinates
Can someone please explain what is happening here? I would really appreciate a solution, since I have no idea what is going wrong...
If possible, can you post a complete minimal example that we can compile and run, as well as some screenshots of the problem? It's really hard to debug graphics problems without pictures or an executable example - hard to know what is going wrong =)
Cheers, Pete
Hi degenerated,
If I had to guess based on the info we have so far, I'd investigate whether the fragment shader code expects the normal to be "normalised", that is, for the "length" of the normal vector to be 1.0.
The length is defined as the square root of the sum of the squares of the elements of the vector.
There's a good page explaining it here http://www.fundza.com/vectors/normalize/
So your test vector of (0.0, 0.0, 0.1) is not normalised (length != 1.0).
The normalised version of that vector would be (0.0, 0.0, 1.0), so you could try that and see whether the fragment shader behaves as expected then?
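To make the arithmetic concrete: the length of (0.0, 0.0, 0.1) is sqrt(0.0^2 + 0.0^2 + 0.1^2) = 0.1, and dividing each component by that length gives (0.0, 0.0, 1.0).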
According to the OpenGL ES Shading Language (ESSL) spec here http://www.khronos.org/files/opengles_shading_language.pdf
there is also a normalize() function built into the language, so you could also try:
v_Normal = normalize(float3(0.0f, 0.0f, 0.1f));
though beware that the operation isn't computationally free, so consider where you really need to use it.
HTH, Pete
Hi Pete,
It fails with float3(0.0, 0.0, 1.0), too.
Anyway, hardcoding the normal value (which isn't even used in the pixel shader) shouldn't corrupt the data for the texture coordinates, right?
I have yet to get this to break in a simple example, but I'm on it.
Edit: I think I've solved it. The GLSL compiler drops the normal attribute when it isn't used, so the normal data ends up assigned to the texcoords instead. Still have a lot to learn about OpenGL I guess, but thanks for the help anyway!
Were you using gl[Bind|Get]AttribLocation to get/set the handle of the attribute in the shader? glGetAttribLocation after linking will give you the correct handle to the attribute, or -1 if it has been compiled out. If you were assuming the attribute handles, then that explains it.
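Just to sketch both options (prog, stride and normalOffset below are placeholders for your own program handle and vertex layout, not something taken from your code):

// Option 1: bind the locations yourself before linking, so every attribute
// keeps the slot you expect even if one of them gets compiled out.
glBindAttribLocation(prog, 0, "in_Position");
glBindAttribLocation(prog, 1, "in_Normal");
glBindAttribLocation(prog, 2, "in_TexCoord");
glBindAttribLocation(prog, 3, "in_LightmapCoord");
glLinkProgram(prog);

// Option 2: query the linked program; glGetAttribLocation returns -1 for
// anything the compiler removed, so only set up the attributes that survived.
GLint normalLoc = glGetAttribLocation(prog, "in_Normal");
if (normalLoc != -1)
{
    glEnableVertexAttribArray(normalLoc);
    glVertexAttribPointer(normalLoc, 3, GL_FLOAT, GL_FALSE, stride, normalOffset);
}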
Thanks,
Chris
You're right, it shouldn't corrupt the texture coordinate data - I was assuming (incorrectly, it seems) that the texture coordinates were being modified by offsets calculated from buggy normal values. Without the fragment shader source, that was my best guess :-)
Glad you made progress. As Chris says, the attribute location seems a likely cause - I've been caught out by the same issue when the compiler spots a uniform or attribute isn't actually used and optimizes it away...
> I've been caught out by the same issue when the compiler spots a uniform or attribute isn't actually used and optimizes it away...
Not optimal, but a conservative way of doing this might be:
for each (attribute) { GLint attribLoc = glGetAttribLocation(program, attribute.name); if (attribLoc != -1) glVertexAttribPointer(attribLoc, blah blah); }
Attrib data always gets pushed to the right place, or not at all if optimized out.
Yeah, that's sort of what I did last night to fix the issue. Now it's working fine!
Thanks for all your support!