For more information on ASTC, take a look at the ARM Multimedia Blog posts "ASTC Texture Compression: ARM Pushes the Envelope in Graphics Technology" and "ARM Unveils Details of ASTC Texture Compression at HPG Conference".
I have started this thread for users of this evaluation tool to ask questions. Here's a very quick "getting started" guide:
First, accept the license, download the tarball and unpack it. The subdirectories Win32, Mac OS X and Linux32 contain binaries for, you guessed it, Windows, Mac OS X, and Linux (x86 versions). If you are running on another system, you might like to try compiling from source - take a look at Source/buildinstructions.txt.
Open a terminal, change to the appropriate directory for your system, and run the astcenc encoder program, like this on Linux or Mac OS:
./astcenc
Or like this on Windows:
astcenc
Invoking the tool with no arguments gives a very extensive help message, including usage instructions, and details of all the possible options.
First, find a 24-bit .png or .tga file you wish to use, say /images/example.png (or on Windows C:\images\example.png).
You can compress it using the -c option, like this (use the first line for Linux or Mac OS, second line for Windows users):
./astcenc -c /images/example.png /images/example-compressed.astc 6x6 -medium
astcenc -c C:\images\example.png C:\images\example-compressed.astc 6x6 -medium
The -c indicates a compression operation, followed by the input and output filenames. The block footprint size follows, in this case 6x6 pixels, then the requested compression speed, medium.
To decompress the file again, use (again, first line for Linux or Mac OS, second for Windows):
./astcenc -d /images/example-compressed.astc /images/example-decompressed.tga
astcenc -d C:\images\example-compressed.astc C:\images\example-decompressed.tga
The -d indicates decompression, followed by the input and output filenames. The output file will be an uncompressed TGA image.
If you just want to test what compression and decompression are like, use the test mode (first line for Linux or Mac OS, second for Windows):
./astcenc -t /images/example.png /images/example-decompressed.tga 6x6 -medium
astcenc -t C:\images\example.png C:\images\example-decompressed.tga 6x6 -medium
This is equivalent to compressing and then immediately decompressing again, and it also prints out statistics about the fidelity of the resulting image, using the peak signal-to-noise ratio.
Take a look at the input and output images.
The block footprints go from 4x4 (8 bits per pixel) all the way up to 12x12 (0.89 bits per pixel). Like any lossy codec, such as JPEG, there will come a point where selecting too aggressive a compression results in unacceptable quality loss, and ASTC is no exception. Finding this optimum balance between size and quality is one place where ASTC excels, since its compression ratio is adjustable in much finer steps than other texture codecs.
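The bit rates quoted above follow directly from the fact that every ASTC block occupies exactly 128 bits, whatever its footprint; a small sketch:

```c
/* Every ASTC block occupies exactly 128 bits regardless of footprint,
 * so the bit rate is simply 128 / (block width * block height). */
static double astc_bits_per_pixel(int block_w, int block_h)
{
    return 128.0 / (block_w * block_h);
}
```

This gives 8.0 bpp for 4x4, about 3.56 for 6x6, 2.0 for 8x8, and about 0.89 for 12x12.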
The compression speed runs from -veryfast, through -fast, -medium and -thorough, up to -exhaustive. In general, the more time the encoder has to spend looking for good encodings, the better the results.
So, download, run, have a play, and post any questions or results on this thread.
Hi Sean,
The ASTC encoder can encode the blocks (4x4 up to 12x12 in size) within an image/frame using different texel color formats. Given this, how is a decoded image/frame whose texel color formats vary at the block level handled on the display side?
Or is it expected that, for a given image/frame, all the blocks should be encoded with one selected texel color format?
Kindly clarify this.
Thanks,
Devendran Mani.
> Or is it expected that, for a given image/frame, all the blocks should be encoded with one selected texel color format?
The only thing which is fixed in ASTC is the block size (as this is needed to allow random addressing) and some of the macro-scale options (LDR or HDR, 2D or 3D).
Nearly all of the properties of the block, including which block type to use, are chosen per block by the compressor, so in a single image you may have multiple blocks compressed using different types of base compression. For example, areas of a color image which are close to greyscale may be compressed using a luminosity block if that gives better error rates.
Even though the compression block type chosen may vary over the image, all texture samples will decode back to the source data type (e.g., RGB for a color texture).
HTH,
Pete
Hi Pete,
Thanks for the details. Kindly clarify the following questions as well.
In the ASTC reference code ("ASTC-Evaluation-Codec-1.3_sdk_backup"):
1) DECODE_LDR_SRGB (-ds) decode mode and the "-sRGB" config option -> what is the difference in functionality in the decoder?
2) For decode mode DECODE_LDR_SRGB, there is some difference in execution between a normal block (non-void-extent) and a void-extent block:
--> If the block is coded as a normal block: the interpolated color is replicated into the lower and higher bytes of a 16-bit value (if (decode_mode == DECODE_LDR_SRGB) color = color | (color << 8);) and then sf16_to_float(unorm16_to_sf16((uint16_t) color)); is applied.
--> If the block is coded as a constant color: constant_color & 0xFF00 is taken and then sf16_to_float(unorm16_to_sf16((uint16_t) color)); is applied. Why this difference?
3) Swizzle pattern: are all combinations of the swizzle pattern valid? (r, g, b, a, 0, 1, z)
4) In the reference code for sRGB conversion, "gamma factor = 2.4" is used; do we need to consider other gamma factors as well?
Mani
Hi Mani,
Sean may have to correct me, but my best attempt at answering is below. It is also worth noting that I would suggest reading the Khronos extension spec for ASTC - it tends to be a better explanation of the decode step than the source code:
http://www.khronos.org/registry/gles/extensions/OES/OES_texture_compression_astc.txt
> 1) DECODE_LDR_SRGB (-ds) decode mode and the "-sRGB" config option -> what is the difference in functionality in the decoder?
The -ds option actually encodes the compressed data as sRGB in the encoding. The -sRGB option converts an sRGB source image to a linear RGB image before compressing it (so the compressed data is actually linear RGB).
> 2) For decode mode DECODE_LDR_SRGB, there is some difference in execution between a normal block (non-void-extent) and a void-extent block. <snip> Why this difference?
I'll let Sean answer this one - I'm not familiar with the compressor at this level of detail.
> 3) Swizzle [-dsw] pattern: are all combinations of the swizzle pattern valid?
Yes, quite probably, although note that this is just a parameter to the encoder to reorder color channels before calling the actual compressor function. It's not a property of the compressed encoding itself.
> 4) In the reference code for sRGB conversion, "gamma factor = 2.4" is used; do we need to consider other gamma factors as well?
No. sRGB is a defined color space encoding, with a gamma correction constant that is (mostly) an exponent of 2.4. Note that this is an approximation of the actual sRGB conversion - most of the curve is an exponent-2.4 curve, but the function is not identical over the entire input space (it has a small linear part near the origin).
See the "reverse transform" section of the sRGB article on Wikipedia.
Mani,
Pete's answers are correct. I double-checked the swizzle code and it is completely general. You can even select "1111" if you wish, although it's unlikely to be useful.
As for the difference in sRGB decoding for void-extent blocks, I am not sure why this is the case. There does not seem to be any particular reason for it. In the specification, the sRGB decoder is assumed to work only on the top 8 bits of the color value, so the two operations should be effectively identical. I will check with the original author and see if there is a reason for this difference.
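To illustrate why the two operations should agree when only the top 8 bits are consumed, here is a standalone sketch of the two code paths quoted earlier in the thread (illustrative helpers, not the codec's actual functions):

```c
/* Normal block path: the interpolated 8-bit color is replicated
 * into both bytes of a 16-bit value. */
static unsigned decode_normal_block(unsigned color)      /* color in 0..0xFF */
{
    return color | (color << 8);
}

/* Void-extent path: only the top byte of the 16-bit constant color
 * is kept. */
static unsigned decode_void_extent(unsigned constant_color) /* 16-bit value */
{
    return constant_color & 0xFF00;
}
```

For a fully saturated channel the normal path yields 0xFFFF and the void-extent path yields 0xFF00; an sRGB decoder that consumes only the top 8 bits (value >> 8) sees 0xFF in both cases.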
Sean.
Pete has given the correct details; I thank him for that.
Kindly clarify below details also.
> Sean: As for the difference in sRGB decoding for void-extent blocks, I am not sure why this is the case. There does not seem to be any particular reason for it. In the specification, the sRGB decoder is assumed to work only on the top 8 bits of the color value, so the two operations should be effectively identical.
Another question: the input to the unorm16_to_sf16() function differs between void-extent and normal blocks. For example, for a decoded color value of 255, the input to unorm16_to_sf16() is 0xFF00 if the block is a void-extent block and 0xFFFF if it is a normal block.
There is a difference in the final color output - is this intended?
I have looked into this further, and the different handling of void-extent blocks for sRGB was made in response to incorrect rounding of sRGB values during decode. The software codec converts the values to floating point, and I will have to create a test to characterize the problem and propose a solution, if one is indeed required.
To be clear, this problem should not affect a hardware decoder, as the hardware solution is defined to directly return the top 8 bits in the sRGB case with no conversion.
A few more questions:
Question 1: I assume that by "hardware" you mean the Mali GPU ASTC decoder; in that case, does the GPU not support sRGB conversion? Please confirm this.
Question 2: In the case of normal maps, where the decoder swizzle pattern is "raz1", is the "z" derivation sqrt(1 - r^2 - a^2) supported by the Mali GPU ASTC decoder?
Devmani,
By "hardware" I do indeed mean the GPU ASTC decoder. Since OpenGL ES mandates that textures may be stored in sRGB color space, and the sRGB-to-linear conversion is quite complex, it is usually supported in hardware so that linear RGB values are returned to the shader pipeline. This is true for all the existing mobile GPUs that I am aware of.
Normal maps, however, are usually just stored as RG textures (X in the R channel, Y in the G channel), and just the X and Y are returned to the shader pipeline. Normal maps are less often used than color maps, and for many use cases the Z component is not required by the shader. Any block of hardware to calculate Z directly would be rarely used, so it is usually left up to the shader to calculate Z if it is required using a relatively simple square root operation. Again, this behavior is the same for all the mobile GPUs.
The reason we include the "z" swizzle on output from the software decoder is so that it is easier for conventional three-component imaging tools to measure the fidelity of the output image.
I have one more question:
The LDR Endpoint Decoding section (3.8) of the ASTC specification says:
"The bit_transfer_signed procedure transfers a bit from one signed byte value (a) to another (b). The result is an 8-bit signed integer value and a 6-bit integer value sign extended to 8 bits."
I understand that "a" is a 6-bit signed value ranging from -32 to 31, but given the line from the specification quoted above, is "b" an 8-bit signed value (-128 to 127)?
If "b" is not a signed value, why do we need to clamp (clamp_unorm8(eo)) in the Luminance+Alpha base+offset mode (mode#4)?
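For context, my reading of the bit_transfer_signed pseudocode in the Khronos extension spec is roughly the following sketch; "b" ends up as an 8-bit value whose top bit is the transferred bit, while "a" is sign-extended from 6 bits:

```c
/* Sketch of bit_transfer_signed as described in the ASTC spec:
 * move the top bit of a into b, then sign-extend a as a 6-bit value. */
static void bit_transfer_signed(int *a, int *b)
{
    *b >>= 1;
    *b |= *a & 0x80;   /* top bit of a becomes the top bit of b */
    *a >>= 1;
    *a &= 0x3F;        /* keep 6 bits of a */
    if ((*a & 0x20) != 0)
        *a -= 0x40;    /* sign-extend the 6-bit value */
}
```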
Kindly clarify.
Devendran Mani
The current ASTC codec only outputs compressed data to .astc files, correct?
Are there any plans to use the compressed texture capabilities of the .ktx and .dds file formats?