
Cortex A8 Instruction Cycle Timing

Note: This was originally posted on 17th March 2011 at http://forums.arm.com

Hi! Sorry for my bad English.

I need to work out the latency of two instructions, and all I have is the ARM Cortex-A8 documentation (chapter 16)!
But I have no idea how to do this using that documentation.
  • Note: This was originally posted on 11th August 2011 at http://forums.arm.com


    I thought I remembered issuing on both first and last cycle but I'm having trouble doing it now too. I'm also having trouble getting the loop you mentioned earlier down to 10 cycles. It looks like it's taking at least 12. The entire loop is taking 14 - since there is stalling, it's difficult to tell how much, if any, is overlapping the 2 cycles of integer loop overhead. You would think that at least one cycle would be overlapped since it's purely a fetch cycle.


    As I said, 10 cycles is the NEON code time.
    The full code is this:

    movw   r1, #:lower16:coef
    movt   r1, #:upper16:coef

    add   r2, r1, #16
    add   r3, r2, #16
    add   r4, r3, #16
    b    .loop1
    .align 4
    .loop1:

    vld1.32 {d16,d17},[r1:128]
    vmul.f32 d0,d15,d14
    vld1.32 {d18,d19},[r2:128]
    vmul.f32 d1,d15,d14
    vld1.32 {d20,d21},[r3:128]
    vmul.f32 d2,d15,d14
    vld1.32 {d22,d23},[r4:128]
    vmul.f32 d3,d15,d14
    vld1.32 {d24,d25},[r1:128]
    vmul.f32 d4,d15,d14
    vld1.32 {d26,d27},[r2:128]
    vmul.f32 d5,d15,d14
    vld1.32 {d28,d29},[r3:128]
    vmul.f32 d6,d15,d14
    vld1.32 {d30,d31},[r4:128]
    vmul.f32 d7,d15,d14

    smuad   r10, r10, r10
    nop
    nop
    smuad   r11, r11, r11
    nop
    subs   r0, r0, #1
    smuad   r12, r12, r12
    bgt   .loop1


    I'm using this code because I'm sure that the trailing ARM code takes exactly 5 cycles and leaves 2 bubbles in the pipeline for the branch.
    Remember this post ;) http://pulsar.webshaker.net/2011/04/17/focus-on-branch-instructions/


    The number of cycles stays the same for me regardless of whether I load to different registers or use different base registers with the same arrangement as in your example. Maybe we're using different revisions of Cortex-A8? I'm using an OMAP3530, how about you?

    I have a BeagleBoard-xM (DM3730). But the processor is not the problem (I believe). Try the code I gave and tell me whether you get 15 cycles (10 for the NEON part and 5 for the ARM part).


    Here are some interesting things I've observed:

    1) If I add one or two pairs of nops in the middle I get the same speed (14 cycles for the loop). If I add a third pair the loop time drops to 13 cycles. With the fourth pair it goes back up to 14 cycles, and every pair after that adds 2 cycles. So, with 3 nop pairs I get no stalls in the NEON code, because there are 12 pairs of instructions (+1 cycle for fetch stall).

    2) If I change three or more of the vld1s to independent vext.8 I get 10 cycles, or full pairing. Same with vmovn, vswp, vrev16, vzip, and vuzp. So the bottleneck is not dual-issue, it's loads and stores.

    3) If I change to 64-bit loads instead of 128-bit I still get 14 cycles for the loop. So I don't think it's a bandwidth limitation.

    4) If I change to 64-bit or 128-bit store I get 21 cycles for the loop. However, here if I store to separate 16-byte addresses in a 64-byte block I get something like 15.5 cycles (this is with a cache-line aligned destination). This is probably due to coalescing filling a whole cache line in the write buffer, where otherwise the cache line has to be loaded. I tried "warming" the buffer by memcpying it to itself to make sure it was in L1 cache, but that didn't make a difference.

    5) If I change the vmul.f32s to vmla.f32 things get bad. If I start at a baseline of no-pairing I get the expected 9 cycles. Then pairing a single vmovn turns it into 12. And from there every new pair adds 4 cycles. I get the same cycles with vrecps.f32, and presumably will with the other chained pipeline instructions.

    So I guess the lessons are to not do too many loads/stores in a row, and that chained pipeline instructions hate being dual issued with anything for some reason. We should do some more testing to see if there are any other instructions that cause a big penalty over dual-issue like this.
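    As a hedged sketch of point 2 above (not measured code; the register choices are arbitrary), replacing some of the back-to-back loads with independent permute instructions is what lets every pair dual-issue:

    @ Illustration only: the pattern from point 2, with half of the vld1s
    @ replaced by independent vext.8 permutes. The permute and the multiply
    @ can dual-issue, while stacked loads contend for the load/store unit.
    .loop2:
        vld1.32  {d16,d17}, [r1:128]
        vmul.f32 d0, d15, d14
        vext.8   d8, d9, d10, #1      @ independent of the loads and multiplies
        vmul.f32 d1, d15, d14
        vld1.32  {d18,d19}, [r2:128]
        vmul.f32 d2, d15, d14
        vext.8   d11, d12, d13, #1
        vmul.f32 d3, d15, d14
        subs     r0, r0, #1
        bgt      .loop2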


    I don't understand point 5, or how you get 9 cycles!

    I think the NEON load process is very complex, and it is not easy to really understand it.
    I've stopped trying to understand it, because a test context is never the same as a real-time context.

    The best we can do for the moment is to follow some guidelines:
    - don't use the same buffer for reads and for writes (when possible)
    - don't update data with the ARM core while you're reading it with NEON (within a loop, I mean). In general, avoid accessing the same data with both ARM and NEON load/store operations. You can only do that if both the ARM and NEON sides are doing loads.
    - try to load data (if possible) long before using it (there are enough registers to load the next iteration's data during the previous one)
    - try to store as soon as possible (that is, as soon as the registers are available for a VST)
    - use alignment whenever possible
    - and now ;) don't read the same memory block with consecutive VLD1s

    It would be useful to understand the real NEON LOAD and STORE process, but I think you could search your whole life without understanding it!
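    The "load long before use" guideline can be sketched like this (an illustration with assumed registers, not tested code; q12 is taken to hold the coefficients and r2 a separate output buffer):

    @ Sketch: double-buffered loads so the next iteration's data is fetched
    @ while the current iteration's arithmetic is still in flight.
        vld1.32  {d16,d17}, [r1:128]!   @ prime: load the first block
    .loop:
        vld1.32  {d18,d19}, [r1:128]!   @ fetch the NEXT block early
        vmul.f32 q0, q8, q12            @ work on the CURRENT block
        vst1.32  {d0,d1},   [r2:128]!   @ store as soon as the result is ready
        vmov     q8, q9                 @ rotate; a real loop would unroll by
        subs     r0, r0, #1             @ two to remove this register move
        bgt      .loop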

    Etienne.