I'm rendering non-color data to an RGBA4444 texture, and I'm seeing a difference in how output written to the alpha channel gets stored in the texture. It seems that the alpha channel's range gets slightly compressed, leading to higher contrast in the contents.
This happens on a Samsung Galaxy S3, which has a Mali-400 GPU. On other GPUs, I don't see these issues. GL_DITHER is disabled.
OpenGL ES 2.0 defines the same transformation from floating-point to the framebuffer's fixed-point representation for RGB and A in this case, so AFAICT there shouldn't be any difference.
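To be concrete, my reading of the spec's float-to-fixed-point store for a single b-bit channel is something like the sketch below (quantize is just a hypothetical helper for illustration; the spec leaves the exact rounding to the implementation, but the 2^b - 1 scale factor is the same for all four 4-bit channels of RGBA4444, so R, G, B and A should land on the same grid):

    #include <math.h>

    /* Sketch of the ES 2.0 conversion from a floating-point component
     * to a b-bit fixed-point channel: clamp to [0, 1], scale by
     * 2^b - 1, round. For RGBA4444, b == 4 for every channel. */
    static unsigned quantize(float f, int bits)
    {
        float max = (float)((1 << bits) - 1); /* 15 for a 4-bit channel */
        if (f < 0.0f) f = 0.0f;
        if (f > 1.0f) f = 1.0f;
        return (unsigned)lroundf(f * max);
    }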
Why does this happen? Am I misunderstanding the spec? If not, is there something I can do to work around the issue?
No, this isn't related to precision in the shader unit; I write the same value to all components of gl_FragColor, and only the value output through the alpha channel ends up different.
Making a repro case is a bit hard right now, as this happened in a customer's application that is rather involved. But I'll give it a go if no other ideas pop up.
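Until then, here's roughly the shape I'd expect a repro to take (a minimal, untested sketch; context and FBO setup omitted, and "value" is a hypothetical uniform to sweep over [0, 1]):

    #include <stdio.h>
    #include <GLES2/gl2.h>

    /* Fragment shader: broadcast one value to all four channels, so any
     * per-channel difference must come from the framebuffer stores. */
    static const char *fs_src =
        "precision mediump float;\n"
        "uniform float value;\n"
        "void main() { gl_FragColor = vec4(value); }\n";

    /* After drawing a full-screen quad into an RGBA4444 FBO with
     * dithering disabled, read one pixel back and compare the channels.
     * GL_RGBA + GL_UNSIGNED_BYTE readback is always legal in ES 2.0. */
    static void check_channels(void)
    {
        unsigned char px[4];
        glDisable(GL_DITHER);
        glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, px);
        if (px[3] != px[0] || px[3] != px[1] || px[3] != px[2])
            printf("alpha differs: R=%u G=%u B=%u A=%u\n",
                   px[0], px[1], px[2], px[3]);
    }

Sweeping value across the 16 quantization steps and the midpoints between them should show exactly where the alpha channel diverges from RGB.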
Hi kusma,
An example shader (or shaders) and a set of input data, along with your expected results, would be helpful in this case, I think. Someone else may also pipe up with more possibilities.
Thanks,
Chris