With astcenc, we can use -cl to compress a linear RGB texture and -cs to compress an sRGB texture, but why does the encoder need to be told whether a texture is sRGB or linear RGB? From the astcenc code I can see some differences in the processing, but I don't understand why they are needed:
(1) The encoder tries to fit the texels in a block to a line; that depends only on the data distribution, so fitting an sRGB texture should be no different from fitting a linear RGB texture (see the sketch after this list).
(2) In the ASTC spec the color endpoint modes (CEMs) only distinguish LDR and HDR; there is no LDR_sRGB mode, so sRGB and linear RGB share the same LDR CEMs.
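For reference on point (1), here is a minimal sketch of that kind of line fit: the mean color of a block plus a dominant direction estimated by power iteration on the covariance. It is only an illustration of the idea, not the actual astcenc implementation, and the type and function names are made up.

```c
#include <math.h>

/* Illustrative sketch only, not the astcenc code: fit block texels to a
 * line described by a mean color and a dominant direction. */
typedef struct { float r, g, b; } vec3;

static void fit_line(const vec3 *texels, int count, vec3 *mean, vec3 *dir)
{
    /* Mean color of the block. */
    vec3 m = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < count; i++)
    {
        m.r += texels[i].r; m.g += texels[i].g; m.b += texels[i].b;
    }
    m.r /= count; m.g /= count; m.b /= count;

    /* 3x3 covariance matrix of the texel colors. */
    float cov[3][3] = {{0}};
    for (int i = 0; i < count; i++)
    {
        float d[3] = { texels[i].r - m.r, texels[i].g - m.g, texels[i].b - m.b };
        for (int a = 0; a < 3; a++)
            for (int b = 0; b < 3; b++)
                cov[a][b] += d[a] * d[b];
    }

    /* A few power-iteration steps approximate the dominant eigenvector,
     * i.e. the direction of the best-fit line through the texels. */
    float v[3] = { 1.0f, 1.0f, 1.0f };
    for (int iter = 0; iter < 8; iter++)
    {
        float n[3];
        for (int a = 0; a < 3; a++)
            n[a] = cov[a][0] * v[0] + cov[a][1] * v[1] + cov[a][2] * v[2];
        float len = sqrtf(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        if (len < 1e-12f) break;
        for (int a = 0; a < 3; a++) v[a] = n[a] / len;
    }

    *mean = m;
    dir->r = v[0]; dir->g = v[1]; dir->b = v[2];
}
```

Nothing in this fit cares whether the values are sRGB-encoded or linear, which is the point being made.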
For compression you don't really need it, although it does influence rounding in the error calculations.
The reason we have it is that the decoder aims to match the hardware implementation exactly, and for that linear and sRGB round the decompressed outputs at different points in the pipeline. Linear uses the full 16-bit result, whereas sRGB uses only the top 8 bits, with rounding based on the 9th bit.
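To illustrate that difference, a minimal sketch (my own illustration, not the astcenc source): the linear LDR path keeps the full 16-bit interpolated value, while the sRGB path keeps only the top 8 bits and rounds them based on the bit just below (the 9th bit from the top).

```c
#include <stdint.h>

/* Illustrative sketch only, not the astcenc source: where the two LDR
 * decode paths round an interpolated UNORM16 color value. */

/* Linear LDR path: keep the full 16-bit result. */
static uint16_t decode_linear_ldr(uint16_t interpolated)
{
    return interpolated;
}

/* sRGB path: keep the top 8 bits, rounding on the next bit down
 * (the 9th bit from the top), clamped to the 8-bit range. */
static uint8_t decode_srgb_ldr(uint16_t interpolated)
{
    uint32_t rounded = ((uint32_t)interpolated + 0x80) >> 8;
    return (uint8_t)(rounded > 0xFF ? 0xFF : rounded);
}
```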
> In the ASTC spec the color endpoint modes (CEMs) only distinguish LDR and HDR; there is no LDR_sRGB mode, so sRGB and linear RGB share the same LDR CEMs.

The sRGB-ness of a texture is a property of the texture format selected via the graphics API; it is not stored in the per-block encoded data.
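For example (a minimal OpenGL ES sketch, assuming the GL_KHR_texture_compression_astc_ldr formats are available), the same compressed block payload can be uploaded as either a linear or an sRGB texture; only the internal format passed to the API changes:

```c
#include <GLES3/gl3.h>

/* Token values from GL_KHR_texture_compression_astc_ldr, in case the
 * header in use does not define them. */
#ifndef GL_COMPRESSED_RGBA_ASTC_4x4_KHR
#define GL_COMPRESSED_RGBA_ASTC_4x4_KHR          0x93B0
#endif
#ifndef GL_COMPRESSED_SRGB8_ALPHA8_ASTC_4x4_KHR
#define GL_COMPRESSED_SRGB8_ALPHA8_ASTC_4x4_KHR  0x93D0
#endif

/* Upload a pre-compressed ASTC 4x4 payload; 'use_srgb' only changes the
 * internal format, not the block data itself. */
static void upload_astc_4x4(const void *blocks, GLsizei size,
                            GLsizei width, GLsizei height, int use_srgb)
{
    GLenum fmt = use_srgb ? GL_COMPRESSED_SRGB8_ALPHA8_ASTC_4x4_KHR
                          : GL_COMPRESSED_RGBA_ASTC_4x4_KHR;

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glCompressedTexImage2D(GL_TEXTURE_2D, 0, fmt, width, height, 0, size, blocks);
}
```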
Thanks
I checked the docs:
(1) OpenGL ES provides GL_COMPRESSED_RGBA_ASTC/GL_COMPRESSED_SRGB8_ALPHA8_ASTC formats for linear and sRGB, so the distinction is not needed in the encoded data.
(2) ASTC is specified to decompress texels into fp16 intermediate values, except for sRGB, which always decompresses into 8-bit UNORM intermediates. For many use cases this gives more dynamic range and precision than required, and the larger decompressed data size can reduce both texture cache efficiency and texture filtering performance.
So my understanding is that astcenc provides -cs/-ds/-ts so that the compression, decompression, and test results match the hardware implementation more closely.
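For reference, example invocations of those modes (argument layout as in the astcenc README; the 4x4 block size and -medium preset are arbitrary choices here):

```sh
# Compress an sRGB texture, decompress it, or run a compress/decompress round-trip test:
astcenc -cs input.png output.astc 4x4 -medium
astcenc -ds output.astc decoded.png
astcenc -ts input.png roundtrip.png 4x4 -medium
```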