I'm looking into the details of ASTC and planning to develop it in hardware. The spec version 1.0 says it has LDR and full profiles. What is the difference between LDR and HDR modes? What is the full profile then?
How does each mode process data? What is the input data format of LDR mode? Does HDR accept only 32-bit IEEE 754 floating point numbers? How many partitions can a block have? How about the decoding? The spec says it has to be bit-exact decode. If a lot of floating point operations are required, is it possible to get bit-exact decoding, given the approximation of a fixed point implementation for floating point calculations?
Ben,
The comments I made about floating point usage and spec conformance apply to the decoder; sorry if this wasn't clear. If you are implementing an encoder, you are free to accept whatever subset of input best suits your needs, as long as the output is validly encoded.
We don't have another publicly available version of the encoder. It would be possible to restrict the floating point operations, but we were targeting a desktop machine where a 32-bit floating point add or multiply is approximately as fast as the equivalent 32-bit integer operation. We therefore felt that the encoder code would be more performant and more readable if we just kept everything in native floating point. The same should be true on an ARM A-class core.
As far as speeding things up is concerned, I think that your best bet will be to restrict the number of encoding modes that have to be searched. If you have a corpus of representative images, you could analyse the output to see what encoding points are most often used, and which are not. (How many times do you need a 4-partition block, for example?) Then work to remove the rarely used modes from the search algorithm, whilst checking that these don't introduce unacceptable artefacts.
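For instance, here is a minimal instrumentation sketch in C for that corpus analysis (the function names are hypothetical, not part of the evaluation codec): it tallies how often each partition count is chosen, and rarely used counts become candidates for pruning from the search.

#include <stdio.h>

static unsigned long partition_histogram[5]; /* indices 1..4 used */

/* Call this from the encoder's block-selection code whenever a
 * partition count is chosen for an encoded block. */
void record_partition_choice(int partition_count)
{
    if (partition_count >= 1 && partition_count <= 4)
        partition_histogram[partition_count]++;
}

/* Print the tally after encoding the whole corpus. */
void print_partition_histogram(void)
{
    for (int p = 1; p <= 4; p++)
        printf("%d-partition blocks: %lu\n", p, partition_histogram[p]);
}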
The Visual Studio solution was created with VS 2010, and I believe that it is compatible with VS 2010 Express.
Sean.
Hi Sean,
Even in the decoder, it retrieves the values as IEEE 32-bit float values for RGB (file astc_image_load_store.cpp, function write_imageblock()), then scales them by multiplying by 255 and stores them as int. So the decoder is also treating the data as float. The data struct which the decoder uses is as follows:
float orig_data[MAX_TEXELS_PER_BLOCK * 4]; // original input data
-ben
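A simplified illustration in C of the store step ben describes (just the shape of it, not the actual write_imageblock() source): each decoded channel is held as a 32-bit float, then scaled by 255, clamped, and rounded to an 8-bit integer.

#include <stdint.h>

/* Illustrative only: scale a decoded channel in [0,1] to 8 bits. */
uint8_t store_channel(float v)
{
    float scaled = v * 255.0f;           /* scale 0..1 to 0..255 */
    if (scaled < 0.0f)   scaled = 0.0f;  /* clamp out-of-range   */
    if (scaled > 255.0f) scaled = 255.0f;
    return (uint8_t)(scaled + 0.5f);     /* round to nearest     */
}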
The software decoder does use floats. However, in the hardware, everything is defined as integer operations.
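To illustrate the kind of arithmetic involved, here is a sketch in C based on the spec's weight interpolation (consult the spec text for the exact endpoint-expansion rules): LDR endpoints are expanded to 16 bits and blended with a 6-bit weight entirely in integers.

#include <stdint.h>

/* LDR endpoint expansion: replicate the 8-bit value into 16 bits. */
uint16_t expand_8to16(uint8_t v)
{
    return (uint16_t)((v << 8) | v);
}

/* Integer weight interpolation; w is the decoded weight in 0..64. */
uint16_t interpolate(uint16_t c0, uint16_t c1, int w)
{
    return (uint16_t)((c0 * (64 - w) + c1 * w + 32) >> 6);
}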
But then the fixed point implementation of the decoder won't be bit-exact with the reference floating point implementation of the decoder. There will be a difference of +/-1. How then can we pass the conformance criteria of bit-exactness?
To implement all the floating point operations using a fixed point implementation, how many bits of precision would be good enough? Can I use an 8-bit fixed point format in the intermediate representation to do all the floating point work?
Could you please help me conclude my design decisions. My design of the ASTC encoder will be based on a fixed point implementation with no floating point hardware unit, and my fixed point representation will use 8 bits. What will be the impact on encoding if I restrict my encoder to 8-bit fixed point? Is it possible to implement it using 8-bit fixed point?
Please respond to my queries.
Happy New Year, Ben. I have been away over the Christmas break and so haven't seen your latest questions. Give me a little while and I will get back to you.
Happy New Year Sean
You are free to use any method you like to encode an image, as long as the resulting encoding is legal, and you are happy with how the result looks. The bit-exact requirements are for decoding, so that what you see on hardware from manufacturer X will be the same as from manufacturers Y and Z. This was a primary requirement from content developers, who wanted to make sure that they didn't have to separately requalify their texture assets on all the different target devices.
I hope I can also put your mind at rest about the floating-point reference decoder. The decoder as written does produce the exact same results as the hardware - we have verified this using our internal test suites and at least two external implementations of the decoder.
I understood the decoder part; as you earlier mentioned, the output will be UNORM16, which is then converted to half float (16-bit) and then to full float (32-bit). So if I take the output at the UNORM16 stage and take the upper 8 bits, it should be bit-exact and I can avoid all float operations. Please correct me if I'm wrong.
I believe that you are correct, yes.
Thanks for clearing up my doubts on the decoder. I will start looking into encoding and will have more doubts.
BTW, I want to know more about AFBC (Arm Frame Buffer Compression). Is it part of the Mali graphics IP? What is the typical use case for AFBC?
Here are a couple of blogs which help to explain how AFBC is used in the system. First, one by me which fits AFBC into our strategy for reducing whole-system power, and then one from Ola Hugosson which talks about how AFBC is used inside the Mali-V500 video codec.
In the reference ASTC decoder, why is the fixed point output (8-bit) option not provided? Is any optimized reference code available for the encoder? What options can be tried to reduce the hardware complexity?
The evaluation codec is a proof-of-concept of the encode and decode processes. I'm not sure what you mean by 8 bit output not being supported. The LDR output is in 8-bit format, so do you mean for HDR? HDR 8-bit output is supported in the spec in order to cater for sRGB encoded images, and this is supported in the codec too.
The encoder has already been quite extensively optimised - particularly in the selection of color endpoints and other exhaustive inner loops.
When you say "reduce the hardware complexity", do you mean for decode or encode? Encoding hardware isn't mandated, so you are free to take whatever shortcuts you like, as long as you are happy with the result and it's a valid encoding. However, decoding hardware must support all the possibilities at your chosen feature level (LDR, HDR, or full profile) in order to pass conformance, so your only real option here is to decide which profile to support.
I mean that in the case of 8-bit sRGB LDR output, UNORM16 is converted to FP16 and then to FP32, and then scaled back to 8-bit. What I need is the output before the floating point conversion, i.e. ((UNORM16) >> 8).
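A trivial C helper capturing that shortcut (the name is hypothetical): it simply keeps the top 8 bits of the UNORM16 interpolation result, skipping the UNORM16 to FP16 to FP32 to 8-bit round trip.

#include <stdint.h>

/* Keep the top byte of the UNORM16 result; this is the 8-bit output
 * before any floating point conversion, as described above. */
uint8_t unorm16_to_8bit(uint16_t c)
{
    return (uint8_t)(c >> 8);
}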