Cortex A8 Instruction Cycle Timing
barney vardanyan
over 9 years ago
Note: This was originally posted on 17th March 2011 at
http://forums.arm.com
Hi, and sorry for my bad English.
I need to work out the latency of two instructions, and all I have is the Arm Cortex-A8 documentation (Chapter 16), but I have no idea how to do that using the documentation.
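For example, take a hypothetical dependent pair like this (made-up registers, not my real code, just to show what I mean):

    MUL r2, r0, r1    @ first instruction, produces r2
    ADD r3, r2, r4    @ second instruction, reads r2

How do I use the tables in Chapter 16 to work out how many cycles pass between issuing the first instruction and issuing the second?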
Gilead Kutnick
over 9 years ago
Note: This was originally posted on 10th August 2011 at
http://forums.arm.com
Yeah, I may be misremembering the queue length. I'll have to check again later today when I have access to the description.
I thought I remembered issuing on both the first and the last cycle, but I'm having trouble reproducing that now too. I'm also having trouble getting the loop you mentioned earlier down to 10 cycles; it looks like it's taking at least 12, and the entire loop is taking 14. Since there is stalling, it's difficult to tell how much, if any, overlaps the 2 cycles of integer loop overhead. You would think at least one cycle would be overlapped, since it's purely a fetch cycle.
The number of cycles stays the same for me regardless of whether I load into different registers or use different base registers with the same arrangement as in your example. Maybe we're using different versions of the Cortex-A8? I'm on an OMAP3530; how about you?
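For reference, the loop I'm timing is shaped roughly like this (not your exact code, the register numbers are just placeholders, and the real loop has more vld1/vmul pairs):

    1:
        vld1.32   {d0,d1}, [r0,:128]!    @ 128-bit load, post-incremented pointer
        vmul.f32  q8, q12, q15           @ independent multiply, candidate for dual-issue
        vld1.32   {d2,d3}, [r0,:128]!
        vmul.f32  q9, q12, q15
        vld1.32   {d4,d5}, [r0,:128]!
        vmul.f32  q10, q12, q15
        @ ... more vld1/vmul pairs ...
        subs      r2, r2, #1             @ integer loop overhead (subs + bne)
        bne       1b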
Here are some interesting things I've observed:
1) If I add one or two pairs of nops in the middle I get the same speed (14 cycles for the loop). If I add a third pair the loop drops to 13 cycles. With a fourth pair it goes back up to 14 cycles, and every pair after that adds 2 cycles. So with 3 nop pairs I get no stalls in the NEON code, because there are 12 pairs of instructions (+1 cycle for the fetch stall).
2) If I change three or more of the vld1s to independent vext.8s I get 10 cycles, i.e. full pairing. Same with vmovn, vswp, vrev16, vzip, and vuzp. So the bottleneck isn't dual-issue itself, it's the loads and stores.
3) If I change to 64-bit loads instead of 128-bit ones I still get 14 cycles for the loop, so I don't think it's a bandwidth limitation.
4) If I change to a 64-bit or 128-bit store I get 21 cycles for the loop. However, if I store to separate 16-byte addresses within a 64-byte block I get something like 15.5 cycles (with a cache-line-aligned destination). This is probably due to write coalescing filling a whole cache line in the write buffer, whereas otherwise the cache line has to be loaded first. I tried "warming" the buffer by memcpying it onto itself to make sure it was in the L1 cache, but that made no difference.
5) If I change the vmul.f32s to vmla.f32 things get bad. Starting from a baseline of no pairing I get the expected 9 cycles; then pairing a single vmovn turns it into 12, and from there every new pair adds 4 cycles. I get the same cycle counts with vrecps.f32, and presumably would with the other chained-pipeline instructions.
So I guess the lessons are: don't do too many loads/stores in a row, and chained-pipeline instructions hate being dual-issued with anything for some reason. We should do some more testing to see whether any other instructions cause a big penalty when dual-issued like this.
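To make points 2 and 5 above concrete, these are the kinds of substitutions I mean (sketches with placeholder registers, not the literal code I'm running):

    @ Point 2: replacing a load with an independent permute restores full pairing.
    @ before:
        vld1.32   {d0,d1}, [r0,:128]!    @ load, limited by the load/store unit
    @ after:
        vext.8    q0, q1, q2, #4         @ permute, no memory access, pairs freely

    @ Point 5: vmla goes through the chained multiply-then-accumulate pipeline,
    @ and in this loop pairing anything with it costs about 4 extra cycles per pair.
        vmul.f32  q8, q12, q15           @ pairs cheaply with vmovn etc.
        vmla.f32  q8, q12, q15           @ same operands, but pairing it is expensive here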