I'm looking into the details of ASTC and planning to develop it in hardware. The spec version 1.0 says it has an LDR and a Full profile. What is the difference between the LDR and HDR modes? What is the Full profile then?
How does each mode process data? What is the input data format of LDR mode? Does HDR accept only 32-bit IEEE 754 floating-point numbers? How many partitions can a block have? And what about decoding: the spec says it has to be bit-exact. If a lot of floating-point operations are required, is bit-exact decoding even possible, given the approximations a fixed-point implementation makes for floating-point calculations?
Hi Sean,
Could you please respond to my queries posted above?
Regards,
-ben
Ben,
Sorry for the delay - I haven't checked in for a while.
If you are not supporting 2bpp (or only supporting 2bpp, I'm not sure which), the consequence is that you will not pass the conformance test, and will not be able to claim ASTC support. Khronos rules are pretty clear on spec conformance matters.
One of the main reasons we introduced ASTC in the first place was feedback from the content developers that the market was too fragmented. The Khronos group therefore ratified the specs with very little room for deviation, so that developers could guarantee results across different platforms. Decode must be bit exact, and all features in the spec must be present - this includes support for all block sizes.
The layout of the blocks in the image is defined in the spec and should start with the block closest to the (s=0, t=0) corner of the image. How we map that to the (x,y) pixels in the image, however, is not specified. I will have to investigate the vertical flip.
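As a quick illustrative sketch of that ordering (my own code, not taken from the spec or the codec - the helper name block_index is hypothetical), the index of the block containing texel (s, t) in raster order from the (s=0, t=0) corner is:

#include <cstdint>

// My own sketch: blocks are laid out in raster order starting from the
// block nearest the (s=0, t=0) corner of the image.
uint32_t block_index(uint32_t s, uint32_t t,         // texel coordinates
                     uint32_t img_w,                 // image width in texels
                     uint32_t blk_w, uint32_t blk_h) // block footprint, e.g. 6x6
{
    uint32_t blocks_per_row = (img_w + blk_w - 1) / blk_w; // round up
    return (t / blk_h) * blocks_per_row + (s / blk_w);
}

How (s, t) then maps to (x, y) pixels on screen is, as I said, not specified.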
There may possibly be a real-time use case for ASTC, which is to use render-to-texture to create relatively long-lived textures such as a skybox. However this will require a specialized encoder optimized for the specific type of images being produced in order to constrain the search space and approach real-time frame rates.
Sean.
Thanks for the details. I need some help understanding the encoder source code - are there any detailed notes on the implementation? As I mentioned earlier, we are planning to support only the LDR profile and 8-bit inputs.
While going through the encoder source code, I see a lot of float usage. For example, the function
fetch_imageblock() uses float variables (float data[6], float *fptr = pb->orig_data;, etc.). These floats are IEEE 32-bit floating-point values. From the earlier messages in this conversation I understood that everything was a fixed-point implementation of the float arithmetic, but I don't see any fixed-point conversion happening in the code. Is a fixed-point implementation of the ASTC code available? For our hardware realization it would be helpful if we could have that source code.
Also, the Visual Studio solution does not compile in VS2008. Which version of VS should I use?
Kindly provide the required details.
The comments I made about floating point usage and spec conformance apply to the decoder; sorry if this wasn't clear. If you are implementing an encoder, you are free to accept whatever subset of input best suits your needs, as long as the output is validly encoded.
We don't have another publicly available version of the encoder. It would be possible to restrict the floating point operations, but we were targeting a desktop machine where a 32-bit floating point add or multiply is approximately as fast as the equivalent 32-bit integer operation. We therefore felt that the encoder code would be more performant and more readable if we just kept everything in native floating point. The same should be true on an ARM A-class core.
As far as speeding things up is concerned, I think that your best bet will be to restrict the number of encoding modes that have to be searched. If you have a corpus of representative images, you could analyse the output to see what encoding points are most often used, and which are not. (How many times do you need a 4-partition block, for example?) Then work to remove the rarely used modes from the search algorithm, whilst checking that these don't introduce unacceptable artefacts.
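As a purely illustrative starting point for that analysis (my own sketch, not a supported tool), here is a small program that histograms the partition count of every block in a compressed file. It assumes the 16-byte header written by the reference astcenc tool and 2D blocks; per the spec, bits[10:0] of each 128-bit block are the block mode, bits[12:11] are the partition count minus one, and void-extent blocks (block mode bits[8:0] = 0x1FC) have no partition field:

#include <cstdint>
#include <cstdio>

int main(int argc, char **argv)
{
    if (argc < 2)
    {
        fprintf(stderr, "usage: %s file.astc\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f)
    {
        perror("fopen");
        return 1;
    }

    uint8_t hdr[16];
    if (fread(hdr, 1, 16, f) != 16)    // astcenc header: magic, block dims, image dims
    {
        fclose(f);
        return 1;
    }

    uint64_t counts[4] = {0, 0, 0, 0}; // histogram of 1..4 partitions
    uint8_t blk[16];                   // every ASTC block is exactly 128 bits
    while (fread(blk, 1, 16, f) == 16)
    {
        uint16_t lo = (uint16_t)(blk[0] | (blk[1] << 8));
        if ((lo & 0x1FF) == 0x1FC)     // void-extent block: no partition field
            continue;
        counts[(lo >> 11) & 0x3]++;    // bits[12:11] = partition count - 1
    }
    fclose(f);

    for (int i = 0; i < 4; i++)
        printf("%d partition(s): %llu blocks\n", i + 1, (unsigned long long)counts[i]);
    return 0;
}

Run it over your representative corpus and the rarely-hit buckets are the first candidates to drop from the search.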
The Visual Studio solution was created with VS 2010, and I believe that it is compatible with VS 2010 Express.
Even in the decoder, the values are retrieved as IEEE 32-bit floats for RGB (file astc_image_load_store.cpp, function write_imageblock()) and then scaled by multiplying by 255 and stored as int. So the decoder is also treating the data as float. The data structure which the decoder uses is as follows:
float orig_data[MAX_TEXELS_PER_BLOCK * 4]; // original input data
The software decoder does use floats. However, in the hardware, everything is defined as integer operations.
But then a fixed-point implementation of the decoder won't be bit-exact with the reference floating-point implementation of the decoder - there will be differences of +/-1. How can we then pass the bit-exactness conformance criteria?
To implement all the floating-point operations in fixed point, how many bits of precision would be good enough? Can I use an 8-bit fixed-point format for all the intermediate float calculations?
Could you please help me conclude my design decisions? My ASTC encoder design will be based on a fixed-point implementation with no floating-point hardware unit, and my fixed-point representation will use 8 bits. What will the impact on encoding be if I restrict my encoder to 8-bit fixed point? Is it possible to implement using 8-bit fixed point?
Please respond to my queries.
Happy New Year, Ben. I have been away over the Christmas break and so haven't seen your latest questions. Give me a little while and I will get back to you.
Happy New Year Sean
You are free to use any method you like to encode an image, as long as the resulting encoding is legal, and you are happy with how the result looks. The bit-exact requirements are for decoding, so that what you see on hardware from manufacturer X will be the same as from manufacturers Y and Z. This was a primary requirement from content developers, who wanted to make sure that they didn't have to separately requalify their texture assets on all the different target devices.
I hope I can also put your mind at rest about the floating-point reference decoder. The decoder as written does produce the exact same results as the hardware - we have verified this using our internal test suites and at least two external implementations of the decoder.
I understood the decoder part: as you mentioned earlier, the output is UNORM16, which is then converted to half-float (16-bit) and then to full float (32-bit). So if I take the output at the UNORM16 stage and keep the upper 8 bits, it should be bit-exact, and I can avoid all float operations. Please correct me if I'm wrong.
I believe that you are correct, yes.
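To make the integer path concrete, here is a tiny self-contained check (my own sketch, not codec source). Per my reading of the spec, an 8-bit LDR endpoint value v is expanded to 16 bits as (v << 8) | v, so taking the top byte of the UNORM16 result recovers v exactly using only integer operations:

#include <cassert>
#include <cstdint>

// LDR expansion of an 8-bit value to 16 bits, per my reading of the spec.
static uint16_t expand_to_unorm16(uint8_t v)
{
    return (uint16_t)((v << 8) | v);
}

int main()
{
    // The upper byte of the UNORM16 value is an exact inverse of the expansion.
    for (int v = 0; v < 256; v++)
        assert((expand_to_unorm16((uint8_t)v) >> 8) == v);
    return 0;
}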
Thanks for clearing up my doubts on the decoder. I will start looking into encoding and will have more questions.
BTW, I want to know more about AFBC (Arm Frame Buffer Compression). Is it part of the Mali Graphics IP? What is the typical use case for AFBC?
Here are a couple of blogs which help to explain how AFBC is used in the system. First, one by me which fits AFBC into our strategy for reducing whole-system power, and then one from Ola Hugosson which talks about how AFBC is used inside the Mali-V500 video codec.