I am using the following two assembly sequences on an ARM Cortex-M3: one reads a memory-mapped I/O register and saves the value to RAM, and the other reads the same I/O register into multiple registers. I want to know exactly how many CPU cycles each sequence takes to complete, or in other words, how fast I am reading the register.
1) Read and save to memory: can LDR-STR-LDR-STR be tightly pipelined (with the address phase of one instruction overlapping the data phase of the previous instruction), in which case the following would take only 9 cycles?
486: 781a ldrb r2, [r3, #0]
488: 7002 strb r2, [r0, #0]
48a: 781a ldrb r2, [r3, #0]
48c: 7042 strb r2, [r0, #1]
48e: 781a ldrb r2, [r3, #0]
490: 7082 strb r2, [r0, #2]
492: 781a ldrb r2, [r3, #0]
494: 70c2 strb r2, [r0, #3]
2) Read into multiple registers: I am assuming these instructions take 5 cycles.
48a: 781c ldrb r4, [r3, #0]
48c: 781d ldrb r5, [r3, #0]
48e: 781e ldrb r6, [r3, #0]
I appreciate any insight you can provide.
Thanks,
This may depend on more than one thing. I think jyiu might be able to give you a more complete answer than I can provide.
I think the I/O timing may depend on the vendor's implementation.
As far as I remember, the instruction alignment is important.
If you use any 32-bit load or store instructions (e.g. ldrb.w or strb.w instead of ldrb.n or strb.n), then make sure the instructions are aligned on a 4-byte boundary.
If a 32-bit instruction is not aligned on a 4-byte boundary, I think you might not get the expected timing.
Thus ...
If you're using only low registers (r0-r7), then you can use 16-bit instructions. If you're using a high register (r8-r15), then you need to use ldrb.w / strb.w instead.
The assembler automatically selects the necessary instruction size, but you may explicitly add the .w or .n suffix.
To make sure your instructions are aligned on a 4-byte boundary, you can use ...
.align 2
... when using the GNU Assembler. Note 2 does not mean 2-byte alignment, it means (1 << 2) byte alignment.
Thus I recommend that you use ldrb.w / strb.w exclusively if any of your load or store instructions involve high registers, because aligning a mix of 16-bit and 32-bit instructions will insert a NOP, which usually costs you 1 clock cycle, so your timing will be affected.
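As a sketch (the register choices here are illustrative, not taken from the original listing), an all-32-bit, aligned version of such a sequence could look like this in the GNU assembler:

```asm
    .syntax unified
    .align  2                   @ (1 << 2) = 4-byte alignment
    ldrb.w  r2, [r3, #0]        @ force the 32-bit encoding
    strb.w  r2, [r8, #0]        @ high register r8 requires the .w form anyway
    ldrb.w  r2, [r3, #0]
    strb.w  r2, [r8, #1]
```

With every instruction 4 bytes long and the block aligned, no padding NOPs are needed inside the sequence.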
Other things you should know: some devices have limits on their GPIO speeds. Some devices have high-speed GPIO pins that follow the CPU speed, so you have nothing to worry about. On some devices, such as STM32 parts, the GPIO pin speed is configurable (you can choose between low, medium, high and very high speeds).
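For instance, on an STM32F4 the pin speed lives in GPIOx_OSPEEDR; a rough sketch for pin PA0 (the base address 0x40020000 and the two-bit field layout are specific to that family, so check your reference manual):

```asm
    ldr     r1, =0x40020008     @ GPIOA_OSPEEDR (GPIOA base 0x40020000 + 0x08)
    ldr     r2, [r1]
    orr     r2, r2, #0x3        @ OSPEEDR0 = 0b11: very high speed for PA0
    str     r2, [r1]
```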
Also, if you're bit-banging, make sure no interrupts can disturb you while you're reading/writing - but you probably know that already.
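In the GNU assembler you can mask interrupts around the timing-critical section via PRIMASK, for example:

```asm
    cpsid   i                   @ set PRIMASK: block configurable-priority interrupts
    @ ... timing-critical load/store sequence goes here ...
    cpsie   i                   @ clear PRIMASK: re-enable interrupts
```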
(If you have a dual-core configuration, then one core might also affect the timing of the other core; I believe this is due to memory read/write access; but as you're using a Cortex-M3, you're likely not using a dual core configuration).
Thank you for giving a very detailed explanation. I knew there was too little background information in my original post; thank you for pointing out the aspects of the system that are relevant to this computation. My system actually:
1) has no interrupts enabled,
2) has fast GPIO, which I am reading from, operating at the CPU clock,
3) uses only r0-r7 in these ldr/str instructions, hence all of them are 16-bit Thumb instructions.
I did some experiments since my post. I repeated the first set of instructions from my post, the load/store pairs (all of which are 16-bit instructions), 32 times and took some measurements. I wanted to confirm that the address/data phases are being pipelined as stated in the ARM Cortex-M3 Technical Reference Manual, which says "LDR R0,[R1,R2]; STR R0,[R3,#20] - normally three cycles total" and "Neighboring load and store single instructions can pipeline their address and data phases. This enables these instructions to complete in a single execution cycle."
However, the ldrb/strb pair executed 32 times took 128 cycles as opposed to 64 (when measured using SysTick), that is, 2 cycles per instruction. I even switched to using multiple registers (r0-r7) for the ldrb/strb pairs instead of reusing the same register, just in case the reuse was causing stalls (though that did not seem likely, since the register r2 used in the ldrb/strb was not used in computing an address).
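The measurement loop was along these lines (a reconstruction rather than the exact code; 0xE000E018 is SYST_CVR, the SysTick current-value register, which counts down):

```asm
    ldr     r1, =0xE000E018     @ SYST_CVR: SysTick current value register
    ldr     r4, [r1]            @ start count
    .rept   32                  @ repeat the ldrb/strb pair 32 times
    ldrb    r2, [r3, #0]
    strb    r2, [r0, #0]
    .endr
    ldr     r5, [r1]            @ end count
    subs    r4, r4, r5          @ SysTick counts down: elapsed = start - end
```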
Also, on another note, when I took the same measurements on the second set of instructions, the strb-strb-strb-..., surprisingly it took only one cycle per strb instruction, confirming that the address and data phases of strb-strb-strb are being pipelined.
I am now confused why the latter behaves as expected/stated in the manual but the former doesn't.
Thanks again ,
Hello,
I would like to confirm the 2nd set is 'strb-strb-strb-'.
Isn't it 'ldrb-ldrb-ldrb'?
If so, I guess the reason for the unexpected behavior would be the destination (SRAM) latency.
Because the source latency is only one cycle thanks to the fast GPIO, a series of fast GPIO accesses can be executed one per cycle.
I would like to propose an experiment in which the destination in the 1st case is also located in the fast GPIO region.
In that case, I guess the behavior of 'ldrb-strb' would match your expectation.
Best regards,
Yasuhiko Koumoto,
I understand your confusion, because I do not know the exact cause myself!
SysTick might not be very accurate; please see this post: Cortex-M3 pipelining of consecutive LDR instructions.
Another thing that could affect the execution timing, is whether you run your code from RAM or Flash memory.
If you have the ART accelerator, you don't need to worry, but if not, you might want to execute the code from internal SRAM.
Now, back to the ldrb/strb pipeline ... It may help to experiment a little:
The first thing to try is to use .align 2, then add the suffix .w to all ldrb and strb instructions - just in case it matters.
Assuming it didn't change anything, try the following:
Instead of pointing r3 and r0 to GPIO space, try pointing both to SRAM; preferably a different SRAM block than the one you're executing code from (in case you execute code from SRAM).
If the results differ, then the GPIO registers may be causing the delay (but there's no guarantee that this is the case, because some devices also have a cache or flash accelerator, which could help when loading from SRAM).
Regarding your first question, whether or not ldrb:strb:ldrb:strb can be tightly pipelined; I think it can not.
As I understand it, nothing can be pipelined after STR.
The first LDR instruction takes 2 clock cycles; the next LDR instruction should take only one cycle (if it's pipelined).
STR rS,[rB,#imm] should always take 1 clock cycle.
Thus I would expect LDR:STR to take 3 clock cycles, not 4.
Note that the examples in the manual always end with a STR instruction.
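Under that reading of the TRM (my interpretation, assuming zero-wait-state memory), the per-instruction counts for the original sequence would be:

```asm
    ldrb    r2, [r3, #0]        @ 2 cycles: address phase + data phase
    strb    r2, [r0, #0]        @ 1 cycle: its address phase overlaps the ldrb data phase
    ldrb    r2, [r3, #0]        @ 2 cycles again: nothing pipelines after the strb
    strb    r2, [r0, #1]        @ 1 cycle
                                @ => 3 cycles per ldrb/strb pair, 12 for all four pairs
```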
Another test to try, is the following:
ldrb r2,[r3,#0]
strb r1,[r0,#0]
ldrb r1,[r3,#0]
strb r2,[r0,#1]
strb r1,[r0,#2]
strb r2,[r0,#3]
-Thus you're not reading a register immediately after it has been written by a load. This should not make any difference according to the manual, but it might be good to get it confirmed.
If you're reading only a single bit, then you'll have the option of saving the result in a register by using the BFI instruction.
Normally when doing this, it's best to read only the low bit(s).
Thus if reading only 2 bits from the port each time, you might be able to get away with ...
.set pos,30
.rept 16
ldr r2,[r3,#0]
bfi r1,r2,#pos,#2
.set pos,pos-2
.endr
This should take at most 3 clock cycles per bit pair. Thus the fewer bits you sample each time, the longer you can sample, spreading the storage over several registers.
Unfortunately, if you need to take a lot of samples, you'll run out of registers.