
OpenGL ES 2.0 Emulator: Strange behavior when using a varying

Hi there!

I'm kinda new to OpenGL land, so this might just be a stupid mistake. First of all, I need my shaders to cross-compile with HLSL, so there is some macro-foo in my shaders, but I don't think that's what is getting me into trouble here.

We didn't get the shaders to run properly on AMD hardware (vertices messed up), but only when I accessed some varyings for texture coordinates and normals. However, I've now run into a similar problem on my Nvidia GTX 570, which seems really strange.

The following happens:

I have this vertex shader here (stripped some #defines and stuff, but you should get the idea):

// [...]

attribute float3 in_Position;
attribute float3 in_Normal;
attribute float2 in_TexCoord;
attribute float2 in_LightmapCoord;

uniform float4x4 Model;
uniform float4x4 View;
uniform float4x4 Projection;
uniform float4x4 ModelViewProj;
uniform float2 LightmapPos;
uniform float2 LightmapPatchSize;

varying float3 v_WorldPos;
varying float3 v_Normal;
varying float2 v_TexCoord;
varying float2 v_LightmapCoord;

// [...]

void main()
{
    v_WorldPos = mul(float4(in_Position.xyz, 1.0f), Model).xyz;
    gl_Position = mul(float4(in_Position.xyz, 1.0f), ModelViewProj);

    //v_Normal = in_Normal;
    v_Normal = float3(0.0f, 0.0f, 0.1f);

    v_TexCoord = in_TexCoord;
    v_LightmapCoord = in_LightmapCoord * LightmapPatchSize + LightmapPos;
}
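
For context: the stripped macro layer presumably maps the HLSL-style names onto GLSL, along these lines (the exact mul() definition is an assumption and depends on how the matrices are laid out):

// Assumed cross-compile macros - not from the original post:
#define float2   vec2
#define float3   vec3
#define float4   vec4
#define float4x4 mat4
#define mul(v, m) ((m) * (v))  // or ((v) * (m)), depending on matrix layout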


This shader messes up the texture coordinates (both v_TexCoord and v_LightmapCoord) in the fragment shader, but only when I assign float3(0.0f, 0.0f, 0.1f) to v_Normal. When I uncomment the line above it instead, everything works just fine.



So it comes down to this:

v_Normal = in_Normal;                 // Works fine
v_Normal = float3(0.0f, 0.0f, 0.1f); // Messes up texture coordinates

Can someone please explain what is happening there? I would really appreciate a solution, since I have no idea what is going wrong...

  • Hi Pete,

    It fails with float3(0.0,0.0,1.0), too.

    Anyway, hardcoding the normal value (which isn't used in the pixel shader) shouldn't corrupt the data for the texture coordinates, right?

    I have yet to get this to break in a simple example, but I'm on it.

    Edit: I think I've solved it. The GLSL compiler drops the normal attribute when it isn't used, so the vertex data meant for the normals gets assigned to the texcoords instead. Still have a lot to learn about OpenGL, I guess, but thanks for the help anyway.
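
    A minimal sketch of that failure mode, assuming the application hard-codes attribute locations in declaration order instead of querying them (the setup function and vertex layout here are hypothetical, not from this thread):

    #include <GLES2/gl2.h>

    /* Hypothetical setup that assumes locations follow declaration order:
     * 0 = in_Position, 1 = in_Normal, 2 = in_TexCoord, 3 = in_LightmapCoord. */
    void setup_texcoord_fragile(GLsizei stride, const void *texCoordOffset)
    {
        glEnableVertexAttribArray(2);  /* "in_TexCoord"... supposedly */
        glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, texCoordOffset);

        /* Once the linker drops the unused in_Normal, the remaining active
         * attributes can end up at different locations, so location 2 may no
         * longer be in_TexCoord - the texcoord data then feeds the wrong
         * attribute, matching the corruption described above. */
    }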


  • Hi degenerated,

    Were you using gl[Bind|Get]AttribLocation to set/get the handle of the attribute in the shader? glGetAttribLocation after linking will give you the correct handle to the attribute, or -1 if it has been compiled out. If you were assuming the handles of the attributes, then that explains it.
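
    For illustration, the two routes might look like this (assuming program is the GLuint program object; the location numbers are just an assumption for this sketch):

    /* Bind locations *before* linking - explicitly bound attributes keep
       these locations even when other attributes get compiled out. */
    glBindAttribLocation(program, 0, "in_Position");
    glBindAttribLocation(program, 1, "in_Normal");
    glBindAttribLocation(program, 2, "in_TexCoord");
    glBindAttribLocation(program, 3, "in_LightmapCoord");
    glLinkProgram(program);

    /* Or query *after* linking; -1 means the attribute was compiled out. */
    GLint texLoc = glGetAttribLocation(program, "in_TexCoord");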

    Thanks,

    Chris

  • Hi degenerated,

    You're right, it shouldn't corrupt the texture coordinate data - I was assuming (incorrectly, it seems) that the texture coordinates were being wrongly modified by offsets calculated from buggy normal values. Without the fragment shader source, that was my best guess :-)

    Glad you made progress. As Chris says, attribute location seems a likely cause - I've been caught out by the same issue when the compiler spots a uniform or attribute isn't actually used and optimizes it away...

    Cheers, Pete

  • "I've been caught out by the same issue when the compiler spots a uniform or attribute isn't actually used and optimizes it away..."

    Not optimal, but a conservative way of doing this might be:

    for (int i = 0; i < numAttribs; ++i) {
         /* query after linking; -1 means the attribute was compiled out */
         GLint attribLoc = glGetAttribLocation(program, attribNames[i]);
         if (attribLoc != -1) {
              glEnableVertexAttribArray(attribLoc);
              glVertexAttribPointer(attribLoc, blah blah);
         }
    }
    

    Attrib data always gets pushed to the right place, or not at all if optimized out.
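
    Applied to the shader above, that might look like the following (the interleaved vertex layout, the stride, and a bound VBO are assumptions for this sketch):

    const char *names[] = { "in_Position", "in_Normal",
                            "in_TexCoord", "in_LightmapCoord" };
    const GLint sizes[] = { 3, 3, 2, 2 };
    GLsizei stride = 10 * sizeof(GLfloat);  /* 3+3+2+2 floats per vertex */
    size_t offset = 0;                      /* byte offset into the bound VBO */

    for (int i = 0; i < 4; ++i) {
         GLint loc = glGetAttribLocation(program, names[i]);
         if (loc != -1) {
              glEnableVertexAttribArray((GLuint)loc);
              glVertexAttribPointer((GLuint)loc, sizes[i], GL_FLOAT, GL_FALSE,
                                    stride, (const void *)offset);
         }
         offset += sizes[i] * sizeof(GLfloat);  /* advance even if dropped */
    }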

  • Yeah, that's sort of what I did last night to fix the issue. Now it's working fine!

    Thanks for all your support!