
How current consumption is affected by the instruction address

Hello to all,

I want to understand how current consumption varies with the instruction address. I therefore performed two experiments: first I filled the pipeline with 32-bit instructions, and then with 16-bit instructions, and observed the effect on energy consumption in each case. I saw a difference in the energy behavior. Can anybody explain why?

I have attached the results below:

For 32-bit NOP instruction (Filled Pipeline)

16-bit NOP instruction (Filled Pipeline)

A clear difference can be observed in the current-consumption patterns, so I have a few questions:

  1. What is the reason for the pattern seen in the 16-bit instruction experiment?
  2. Even though both cases are little-endian, what is happening differently between them?
  3. Looking at the disassembly, I observed that the instruction address increments by 4 for 32-bit instructions but only by 2 for 16-bit ones. Is that the reason?
  4. Assuming little-endian layout, the memory looks like the following to the processor:

    ...

    Byte[0xB], Byte[0xA], Byte[0x9], Byte[0x8].

    Byte[0x7], Byte[0x6], Byte[0x5], Byte[0x4].

    Byte[0x3], Byte[0x2], Byte[0x1], Byte[0x0].

    Within each row, the address is the same for every byte lane. Is it safe to assume that the current consumption for each row is similar? In the 16-bit case only two byte lanes will be activated, but for 32-bit all four byte lanes.

  5. Assuming the current consumption for each row is the same, I performed another experiment in which the pipeline is filled with 32-bit instructions, except that the very first instruction is a 16-bit one. Looking at the disassembly, every 32-bit instruction then has to activate two 2-byte lanes from two different rows, because the 16-bit instruction at the start shifts the alignment. Still, no variation was observed. But when the same 16-bit instruction is placed in the middle of the filled pipeline, random jumps start appearing in the resulting graph, as shown below. What is the reason for this?

One 16-bit NOP instruction in the pipeline of 32-bit NOP instruction
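To make the alignment effect in point 5 concrete, here is a small sketch (plain Python, purely illustrative; it models nothing about the core's power, only addresses). It computes which word-aligned fetch words each Thumb instruction occupies. A leading 16-bit instruction shifts every subsequent 32-bit instruction so that it straddles two fetch words:

```python
# Hypothetical sketch: for a straight-line Thumb instruction stream,
# list the word-aligned fetch addresses each instruction's bytes fall in.

def fetch_words(sizes, base=0x0):
    """sizes: instruction lengths in bytes (2 or 4).
    Returns, per instruction, the sorted word-aligned addresses it touches."""
    out = []
    addr = base
    for size in sizes:
        # Each byte of the instruction belongs to some 4-byte-aligned word.
        words = {(addr + off) & ~0x3 for off in range(size)}
        out.append(sorted(words))
        addr += size
    return out

# All 32-bit NOPs: each instruction sits in exactly one aligned word.
print(fetch_words([4, 4, 4]))  # → [[0], [4], [8]]

# One 16-bit NOP first: every later 32-bit instruction straddles two words.
print(fetch_words([2, 4, 4]))  # → [[0], [0, 4], [4, 8]]
```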

Kindly help me out with this. All experiments were performed on an Arm Cortex-M4.

Thanking you,

Kind Regards,

Himanshu

  • The CM4 has only a 3-stage pipeline.

    See this (from the TRM):

    "ICode memory interface
    Instruction fetches from Code memory space, 0x00000000 to 0x1FFFFFFC, are performed over this
    32-bit AHB-Lite bus.
    The Debugger cannot access this interface. All fetches are word-wide. The number of
    instructions fetched per word depends on the code running and the alignment of the code in
    memory."

    So if properly aligned, you can fetch two 16-bit instructions at a time; for the same number of instructions you therefore fetch only half the words.
    BTW: I am not sure how you are measuring, but if you are on a real chip, you also have the power consumption of the AHB bus and the Flash.
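A quick back-of-the-envelope sketch of that point (Python, illustrative only, assuming word-wide 4-byte fetches from a word-aligned base as the TRM describes): counting the distinct words touched shows that the same number of 16-bit NOPs needs only half the fetches that 32-bit NOPs do.

```python
# Sketch: distinct 32-bit words touched by a straight-line sequence of
# n_instr instructions of instr_size bytes, starting word-aligned.

def word_fetches(n_instr, instr_size, base=0x0):
    total_bytes = n_instr * instr_size
    first = base & ~0x3                       # word holding the first byte
    last = (base + total_bytes - 1) & ~0x3    # word holding the last byte
    return (last - first) // 4 + 1

print(word_fetches(100, 4))  # → 100 word fetches for 100 32-bit NOPs
print(word_fetches(100, 2))  # → 50 word fetches for 100 16-bit NOPs
```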

