
Cortex-M3 - Literal pool vs MOVW/MOVT when a cache is present

This question is something of a survey:

Hi folks,

I know this subject, or something very close to it, has already been discussed, but I haven't found a satisfactory answer.

First, a short reminder

The Cortex-M3 instruction set offers three ways to load a 32-bit literal (address or constant) into a register:

1/ Using a literal pool and a PC-relative load:

   LDR Rx, [PC, #offset_to_constant]   

2/ Using a MOVW/MOVT pair to load the constant in two steps:

   MOVW Rx, #least_significant_halfword

   MOVT Rx, #most_significant_halfword

3/ Using the 'flexible second operand' modified-immediate form (but it is out of scope for my question),

   if the constant:

   - can be obtained by left-shifting an 8-bit value (e.g. '000001FE'h = 'FF'h << 1)

   - has the format '00XY00XY'h

   - has the format 'XY00XY00'h

   - has the format 'XYXYXYXY'h

   MOV.W Rx, #constant
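
To make the three forms concrete, here is a minimal sketch; the constant 0x40021000 and the registers are arbitrary examples of mine, not taken from any particular project:

   @ 1/ literal pool: the assembler puts the constant in a pool and emits a PC-relative LDR
   ldr   r0, =0x40021000    @ becomes: ldr r0, [pc, #offset] plus a .word in the pool

   @ 2/ MOVW/MOVT pair: the constant is encoded in two immediates, no data access at all
   movw  r0, #0x1000        @ r0 = 0x00001000
   movt  r0, #0x4002        @ r0 = 0x40021000

   @ 3/ flexible second operand: a single instruction, but only for encodable patterns
   mov.w r1, #0x00FF00FF    @ '00XY00XY'h pattern with XY = FF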

Based on these elements, here is my analysis:

[A - Instruction timing]

From the instruction timing, we have the following results:

# literal-pool version :

   code size : 6 or 8 bytes, depending on the relative offset to the constant (6 if offset % 4 == 0 && offset in [0, 1020], which allows the 16-bit LDR encoding; see the sketch at the end of this section)

   speed     : 2 cycles for the LDR (or 1 cycle in some cases)

# MOVW/MOVT version :

   code size : 8 bytes

   speed     : 2 cycles
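
To ground the size figures, here is a sketch of the encodings involved (the PC offsets are purely illustrative):

   ldr   r0, [pc, #16]      @ 16-bit LDR   + 4-byte pool word = 6 bytes (word-aligned offset in [0, 1020])
   ldr.w r0, [pc, #-12]     @ 32-bit LDR.W + 4-byte pool word = 8 bytes (any offset in [-4095, 4095])
   movw  r0, #0x1000        @ MOVW, 4 bytes
   movt  r0, #0x4002        @ MOVT, 4 bytes: 8 bytes total, no pool word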

[B - Cache]

From a cache-usage point of view, more precisely with a unified code/data cache (my case):

As the MOVW and MOVT instructions are contiguous, the principle of locality is respected.

For the 'literal-pool' version, the instruction and the constant pool may be separated by more than a cache line, so the principle of locality is not respected; this can induce misses for subsequent data accesses in the system memory space.
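
As a side note, if pool placement is the concern, assemblers let you control it: GNU as has a .ltorg directive (LTORG in armasm) that dumps the pending literal pool at that point, so the pool word can sit right after the code that uses it. A minimal sketch (function name and constant are made up):

   get_reg:
      ldr   r0, =0x40021000   @ pulls the constant from the pool just below
      ldr   r0, [r0]
      bx    lr
      .ltorg                  @ dump the literal pool here, immediately after the code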

[Conclusions]

# 'literal-pool' needs the same number of cycles as 'MOVW/MOVT'    :  2 cycles vs  2 cycles

# 'literal-pool' takes up less room in the prefetch-unit buffer    : 16 %      vs 66 % (2 vs 8 of the 12 bytes, assuming the 3-word prefetch buffer)

# 'literal-pool' needs fewer instruction fetches on the I-Code bus :  1 fetch  vs  2 fetches

# 'literal-pool' needs more data fetches on the D-Code bus         :  1 fetch  vs  0 fetches

# 'literal-pool' does not respect the principle of locality for cache purposes, whereas 'MOVW/MOVT' does

Note: I know compilers seem to favor the literal-pool version because of the code-size gain when the same constant is used in several places.
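
For illustration, that sharing looks like the sketch below; as far as I know, both armasm and GNU as merge identical literals within a pool's range, which MOVW/MOVT cannot do:

   ldr   r0, =0x40021000    @ first use: a 2- or 4-byte LDR
   @ ... other code ...
   ldr   r5, =0x40021000    @ second use: another small LDR pointing at the SAME pool word
   @ total: several small LDRs sharing one 4-byte literal, vs 8 bytes of MOVW/MOVT per use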

After this short analysis, assuming it is correct, I am no longer sure which strategy gives the best execution speed.

Question 1:

Does anyone have any feedback on this topic?

Question 2 (subsidiary):

Does anyone know if these elements are taken into account by development tools like Keil nowadays?
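
For reference, GNU GCC for ARMv7-M offers -mslow-flash-data, which makes the compiler prefer MOVW/MOVT over literal-pool loads when literal fetches from flash are slow (I do not know whether Keil's tools expose an equivalent switch). A sketch of the two possible outputs for 'return *(volatile unsigned int *)0x40021000;', with illustrative register allocation:

   @ default (literal pool):
   ldr   r3, .Lc            @ address fetched over the D-Code bus
   ldr   r0, [r3]
   bx    lr
.Lc:
   .word 0x40021000

   @ with -mslow-flash-data (MOVW/MOVT):
   movw  r3, #0x1000
   movt  r3, #0x4002
   ldr   r0, [r3]
   bx    lr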

Thanks.

Reply
  • Hmm, I think I'm on thin ice here.

    N-way - does this mean that there are in fact "multiple cache entries"?

    associative - does this mean that a cache is associated with an address range?

    I imagine it means that the cache is 'intelligent' and the least probable cache entries are the ones being recycled.

    It's only a guess; you'll probably have to ask your MCU vendor about the details.

    One coding style that is very likely to pay off is to ask yourself: "What would the CPU like to do the most?"

    Example: a developer back in the early '90s had two choices for fetching the high byte of a 16-bit value.

    1: He could use LSR

    2: He could store the 16-bit value in memory and read it as a byte.

    The operation would use the exact same number of clock cycles.

    He chose option 1, the LSR instruction. This was a clever choice: a few years later a new processor became available and a new computer was added to the family his program ran on. The memory access took the same number of clock cycles as before, but the new processor had better instruction caching, which meant the LSR instruction executed much faster.

    Thus the difference was significant, since this operation was running in a loop.

    Remember that the CPU loves being lazy and it hates accessing external resources.

    If you need to load a lot of immediate data values (e.g. in loops), then consider keeping common values in one 32-bit register and a bitmask in another.

    Loading a 16-bit value could be done this way...

    We want the value 0x00321800 in r3, the value 0x00765400 in r2, and the value 0x07654000 in r4.

        @ r7 holds the mask: 0x00ffff00
        @ r6 holds the data: 0x87654321
        and r3,r7,r6,ror#20    @ 0x00ffff00 & (r6 ror 20 = 0x54321876) -> 0x00321800
        and r2,r7,r6,ror#4     @ 0x00ffff00 & (r6 ror  4 = 0x18765432) -> 0x00765400
        and r4,r6,r7,ror#28    @ 0x87654321 & (r7 ror 28 = 0x0ffff000) -> 0x07654000

    So in the first two instructions we rotate the data; in the last instruction we rotate the mask.

    ... I had to do this once in some code that required a lot of constant values and tight timing.

    I couldn't afford the overhead of a loop; the code had to be completely unrolled.

    In addition, I had to squeeze as many pre-loaded values into registers as I could.

    This solution changed the task from 'impossible' to just barely possible; if it had required one more clock cycle anywhere, everything would have broken.
