I'm looking into the details of ASTC and planning to develop it in hardware. The spec version 1.0 says it has LDR and full profiles. What is the difference between the LDR and HDR modes? What is the full profile, then?
How does each mode process data? What is the input data format of LDR mode? Does HDR accept only 32-bit IEEE 754 floating-point numbers? How many partitions can a block have? As for decoding, the spec says it has to be bit-exact. If a lot of floating-point operations are required, is bit-exact decoding even possible, given the approximations of a fixed-point implementation of floating-point calculations?
Ben,
The three profiles for ASTC are supersets of each other, as follows:
- LDR profile: 2D textures, low dynamic range only.
- HDR profile: 2D textures, adds high dynamic range support.
- Full profile: adds 3D (volumetric) textures on top of the HDR profile.
For input data, the evaluation codec (available from the ASTC Evaluation Codec page) takes 8-bit UNORM values for LDR, input as an image, usually in TGA, BMP, GIF or PNG format. For HDR, it takes 16-bit pixel values in KTX or DDS formats. The encoder itself works with 16-bit IEEE 754 floats, which are stored in a pseudo-logarithmic format within the ASTC data.
A block may have from 1 to 4 partitions, each with a separate set of color endpoints. In addition, there is the option to specify a second set of weights for one channel of the image data. This allows more flexible encoding for textures with uncorrelated channels, such as X+Y for normal mapping, or L+A or RGB+A for decals.
The requirement for bit-exactness was requested by the content developers, as it makes it very much easier to qualify content for multiple platforms if the output is guaranteed. The decoder is specified very exactly using integer operations on the internal representation of HDR data, which synthesise the floating-point operations. This allows us to specify the exact bit-patterns delivered to the filtering stage of the texture pipeline. After that, of course, we have to place our trust in the filter implementation.
I hope that this is helpful.
Hi Sean,
Thanks for the details.
So, can I assume that in LDR mode all the internal operations of the encoder/decoder are in integer format, and there are no floating-point operations for LDR?
I'm not clear on partitions. The spec says a block can use one of 2048 partition patterns; what does that 2K (11-bit) value correspond to?
> A block may have from 1 to 4 partitions, each with a separate set of color endpoints. In addition, there is the option to specify a second set of weights for one channel of the image data.
Is this the dual-plane encoding (the second set of weights) mentioned in the spec?
In the HDR case, are only 16-bit IEEE 754 floating-point operations performed, and no 32-bit IEEE 754 floating-point operations?
I understand that each block can be encoded/decoded in parallel because there are no dependencies, but doesn't the concept of a "void extent block" create a dependency on neighboring blocks? How does the encoding of void extent blocks happen if they have to signify the presence of neighboring colors? How many neighbors does it consider?
-ben
Yes, all LDR operations are defined internally using integers.
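To make that concrete, here is a minimal sketch of the LDR weight-application step as I read it from the spec: endpoints are bit-replicated from 8 to 16 bits and blended with a 6-bit weight (0..64), entirely in integer arithmetic.

#include <cstdint>

// Minimal sketch of LDR weight application, per my reading of the spec:
// 8-bit endpoints are expanded to 16 bits by bit replication, then
// interpolated with a 6-bit weight (0..64) in pure integer arithmetic.
static inline uint16_t expand_endpoint(uint8_t v)
{
    return static_cast<uint16_t>((v << 8) | v);   // e.g. 0xAB -> 0xABAB
}

static inline uint16_t interpolate_ldr(uint8_t c0, uint8_t c1, int weight)
{
    uint32_t e0 = expand_endpoint(c0);
    uint32_t e1 = expand_endpoint(c1);
    // Rounded blend; no floating point anywhere.
    return static_cast<uint16_t>((e0 * (64 - weight) + e1 * weight + 32) >> 6);
}

This also shows where the 16-bit outputs discussed later in this thread come from: the blend result is a 16-bit value even for 8-bit inputs.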
I see what you mean about the partitions now. Yes, each block can choose one of 2048 possible partition patterns - but it can also choose whether the partition pattern selects between 2, 3 or 4 sets of colors.
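As a toy illustration of the mechanism (deliberately not the spec's actual generator function, which is defined precisely in the spec text): the decoder computes each texel's partition procedurally from the seed and the texel coordinates, so none of the partition patterns need to be stored as tables.

#include <cstdint>

// Toy illustration only -- NOT the hash defined in the ASTC spec.
// It shows the principle: the partition of each texel is computed on
// the fly from (seed, partition_count, x, y), so the partition
// patterns never need to be stored as tables in hardware.
static uint32_t mix(uint32_t p)
{
    p ^= p >> 15;  p *= 0x2C1B3C6Du;
    p ^= p >> 12;  p *= 0x297A2D39u;
    p ^= p >> 15;
    return p;
}

static int texel_partition(int seed, int partition_count, int x, int y)
{
    uint32_t h = mix(seed * 0x9E3779B9u + (x << 8) + y);
    return static_cast<int>(h % partition_count);   // 0 .. partition_count-1
}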
The dual-plane modes are documented in the spec, under "Dual-Plane Decoding".
The HDR operations are all specified as integer operations, with the final bit pattern being interpreted as a 16-bit floating point value. There aren't actually any floating point operations inside the decoder, for exactly the reasons you have identified.
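From memory, the final step looks like the sketch below: the 16-bit interpolated value is treated as a pseudo-logarithmic code, the mantissa gets a small piecewise-linear correction, and the result is emitted as a raw binary16 bit pattern. Please check the constants against the published spec before relying on them.

#include <cstdint>

// HDR weight application, final step (my recollection of the spec's
// pseudocode -- verify the constants against the published spec).
// The 16-bit interpolated value i is split into a 5-bit exponent and
// an 11-bit mantissa; the mantissa gets a piecewise-linear correction,
// and the result is the raw bit pattern of an IEEE 754 binary16 value.
static uint16_t hdr_interpolant_to_fp16_bits(uint16_t i)
{
    uint16_t e = i >> 11;        // exponent field
    uint16_t m = i & 0x7FF;      // mantissa field
    uint16_t mt;
    if (m < 512)       mt = 3 * m;
    else if (m < 1536) mt = 4 * m - 512;
    else               mt = 5 * m - 2048;
    return static_cast<uint16_t>((e << 10) + (mt >> 3));  // FP16 bit pattern
}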
The void-extent concept is an optional optimization, and is fairly flexible. A constant-color block can specify a larger rectangle over which the color remains constant (this is the "void-extent"), and this may cover adjacent blocks too. Thus, if you have already seen any of the blocks in that constant-color area, the decoder can skip a block fetch and just reuse the one it has already seen. For faster encoding, you can always encode a block of constant color as having no additional extent, which requires no dependency on neighbouring blocks.
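A rough sketch of how a decoder might exploit this; the VoidExtent structure and single-entry cache here are invented for illustration, not taken from the spec.

#include <cstdint>

// Illustrative only: one possible decoder-side use of a void extent.
// The structure and caching policy are invented for this sketch; the
// actual extent encoding is defined by the spec's void-extent layout.
struct VoidExtent
{
    int s_min, s_max, t_min, t_max;   // texel rectangle of constant color
    uint16_t color[4];                // constant RGBA (FP16 bit patterns)
    bool valid = false;
};

static VoidExtent g_last_extent;      // most recently decoded extent

// Returns true (and the constant color) if the requested texel is
// covered by a previously seen void extent, so the block fetch and
// full decode can be skipped.
static bool try_void_extent(int s, int t, uint16_t out_color[4])
{
    if (!g_last_extent.valid) return false;
    if (s < g_last_extent.s_min || s > g_last_extent.s_max) return false;
    if (t < g_last_extent.t_min || t > g_last_extent.t_max) return false;
    for (int c = 0; c < 4; c++) out_color[c] = g_last_extent.color[c];
    return true;
}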
Sean.
I have the ASTC spec 1.0 PDF that came with evaluation codec kit 1.3, and Table 4 in it says that for LDR the returned value is a vector of FP16 values or a vector of UNORM8 values. According to you it should return only UNORM8 for LDR. It also says for LDR: "LDR endpoint decoding precision 16 bits, or 8 bits for sRGB". Could you please clarify this?
Is an updated spec available?
Does the evaluation codec support the HDR and Full profiles?
The latest spec is the extension specification on the Khronos website - https://www.khronos.org/registry/gles/extensions/OES/OES_texture_compression_astc.txt
The 16-bit output is to allow filtering without having to know which profile is being decoded. If your filtering unit only takes 8-bit input, then I think that it is acceptable to take the top 8 bits of the 16-bit UNORM result without converting it to floating point.
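In code, that truncation is just a byte shift; a trivial sketch:

#include <cstdint>

// Feeding an 8-bit filtering unit from the 16-bit UNORM decode result:
// take the top byte, with no conversion through floating point.
static inline uint8_t unorm16_to_unorm8(uint16_t v)
{
    return static_cast<uint8_t>(v >> 8);
}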
Similarly, the internal precision of the interpolation using the weights is 16 bits, so even with an LDR input it is possible to get 16-bit outputs.
The evaluation codec supports full profile, which includes the HDR and LDR profiles. By default, only LDR endpoints are considered, so if you want to encode as HDR you should also supply the "-hdr" command line switch.
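For example, an HDR compression run might look something like this (the exact command line syntax may differ between codec versions, so check the tool's built-in help):

astcenc -c input.ktx output.astc 6x6 -medium -hdr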
Sean
Are there any fields which specify the maximum width and height of the image being encoded? Because of area limitations we won't be able to support the HDR profile in hardware for either the encoder or the decoder, only LDR at the 8x8 block size (2bpp). What is the best quality I can get at a 2bpp bitrate for natural/synthetic images? Do you see any consequences of not supporting the HDR and Full profiles, and supporting LDR only at 2bpp?
While debugging the code, I noticed the image is flipped when it is accessed for encoding, in file astc_stb_tga.cpp, line 46 ("y_flip"). Why is the image accessed bottom to top? Is this a requirement?
Is there any real-time application use case for ASTC encoding?
Could you please respond to my queries posted above.
Regards,
Sorry for the delay - I haven't checked in for a while.
If you are not supporting 2bpp (or only supporting 2bpp, I'm not sure which), the consequence is that you will not pass the conformance test, and will not be able to claim ASTC support. Khronos rules are pretty clear on spec conformance matters.
One of the main reasons we introduced ASTC in the first place was feedback from the content developers that the market was too fragmented. The Khronos group therefore ratified the specs with very little room for deviation, so that developers could guarantee results across different platforms. Decode must be bit exact, and all features in the spec must be present - this includes support for all block sizes.
The layout of the blocks in the image is defined in the spec and should start with the block closest to the (s=0, t=0) corner of the image. How we map that to the (x,y) pixels in the image, however, is not specified. I will have to investigate the vertical flip.
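The block ordering itself is straightforward; here is a sketch of locating the 128-bit block containing texel (s, t), assuming simple raster order of blocks starting at the (s=0, t=0) corner:

#include <cstddef>

// Sketch: byte offset of the 16-byte ASTC block containing texel (s, t),
// assuming blocks are stored in raster order from the (s=0, t=0) corner.
// block_w/block_h are the block footprint; image_w is in texels.
static size_t block_offset(int s, int t, int block_w, int block_h, int image_w)
{
    int xblocks = (image_w + block_w - 1) / block_w;     // blocks per row
    int bx = s / block_w;
    int by = t / block_h;
    return static_cast<size_t>(by * xblocks + bx) * 16;  // 128 bits per block
}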
There may possibly be a real-time use case for ASTC, which is to use render-to-texture to create relatively long-lived textures such as a skybox. However this will require a specialized encoder optimized for the specific type of images being produced in order to constrain the search space and approach real-time frame rates.
Thanks for the details. I need some help understanding the encoder source code; are there any detailed notes on the implementation? As I mentioned earlier, we are planning to support only the LDR profile with 8-bit inputs.
While going through the encoder source code, I see a lot of float usage. For example, in the function fetch_imageblock() there are float variables (float data[6] and float *fptr = pb->orig_data;, etc.). These floats are 32-bit IEEE 754 floating-point values. From the earlier messages in this conversation you mentioned that everything is defined as integer operations, but I don't see any fixed-point conversion happening in the code. Is a fixed-point implementation of the ASTC code available? For our hardware realization it would be helpful if we could have that source code.
The Visual Studio solution does not compile in VS2008. Which version of VS should I use?
Kindly provide the required details.
The comments I made about floating point usage and spec conformance apply to the decoder; sorry if this wasn't clear. If you are implementing an encoder, you are free to accept whatever subset of input best suits your needs, as long as the output is validly encoded.
We don't have another publicly available version of the encoder. It would be possible to restrict the floating point operations, but we were targeting a desktop machine where a 32-bit floating point add or multiply is approximately as fast as the equivalent 32-bit integer operation. We therefore felt that the encoder code would be more performant and more readable if we just kept everything in native floating point. The same should be true on an ARM A-class core.
As far as speeding things up is concerned, I think that your best bet will be to restrict the number of encoding modes that have to be searched. If you have a corpus of representative images, you could analyse the output to see what encoding points are most often used, and which are not. (How many times do you need a 4-partition block, for example?) Then work to remove the rarely used modes from the search algorithm, whilst checking that these don't introduce unacceptable artefacts.
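A sketch of that corpus analysis; partition_count_of_block() is a hypothetical helper you would implement against your own block parser:

#include <cstdint>
#include <cstdio>

// Sketch of the suggested corpus analysis: count how often each
// partition count appears in already-encoded blocks, to decide which
// modes an area-constrained encoder can drop from its search.
// partition_count_of_block() is a hypothetical helper (assumed to
// return 1..4) -- implement it against your own block parser.
extern int partition_count_of_block(const uint8_t block[16]);

static void histogram_partitions(const uint8_t* blocks, size_t nblocks)
{
    size_t counts[5] = {0};                 // indices 1..4 used
    for (size_t i = 0; i < nblocks; i++)
        counts[partition_count_of_block(blocks + 16 * i)]++;
    for (int p = 1; p <= 4; p++)
        printf("%d partitions: %zu blocks\n", p, counts[p]);
}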
The Visual Studio solution was created with VS 2010, and I believe that it is compatible with VS 2010 Express.
Even the decoder retrieves the RGB values as 32-bit IEEE 754 floats (file astc_image_load_store.cpp, function write_imageblock()), then scales them by multiplying by 255 and stores them as int. So the decoder is also treating the data as float. The data structure which the decoder uses is as follows:
float orig_data[MAX_TEXELS_PER_BLOCK * 4]; // original input data
The software decoder does use floats. However, in the hardware, everything is defined as integer operations.
But then a fixed-point implementation of the decoder won't be bit-exact with a reference floating-point implementation of the decoder; there will be differences of +/-1. How can we then pass the bit-exactness conformance criterion?
To implement all the floating-point operations using fixed point, how many bits of precision would be good enough? Can I use an 8-bit fixed-point format for all the intermediate representations of the float calculations?
Could you please help me conclude my design decisions. My ASTC encoder design will be based on a fixed-point implementation with no floating-point hardware unit, and my fixed-point representation will use 8 bits. What will be the impact on encoding if I restrict my encoder to 8-bit fixed point? Is it possible to implement it using 8-bit fixed point?
Please respond to my queries.