Hi, I've been trying to get a good grasp of the factors that affect interrupt handling in the Cortex-M family. I've read "A Beginner’s Guide on Interrupt Latency - and Interrupt Latency of the Arm Cortex-M processors" and "Cortex-M for Beginners", and while those were both very useful, I'm still left with a couple of questions, especially regarding the Cortex-M7, since that's what I've been using.
Specifically, my biggest question is this: in "Cortex-M for Beginners" there's a table listing, for each Cortex-M processor, the number of clock cycles needed to enter an interrupt handler. The entry for Cortex-M7 shows "Typically 12, worst case 14", but I can't find any information on what can cause the worst-case response.
From what I've gathered, this can't be caused by multi-cycle instructions or anything of that sort, so I'm wondering whether this is a very edge-case scenario or something that can happen regularly. I'm also wondering whether the worst case is implementation-dependent. I've only found two references in any ARM documentation to things that affect the worst-case interrupt latency. The first is when deep sleep is engaged, which can only occur on implementations with the optional Wakeup Interrupt Controller (section 2.5.3 in the "Cortex-M7 Devices Generic User Guide"). The other is when DISCRITAXIRUR (disable critical AXI read-under-read) is set in the Auxiliary Control Register, which according to table 3-3 in the Technical Reference Manual can improve the worst-case interrupt latency.
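For anyone else looking at that bit, here's a minimal sketch of setting it from a CMSIS-based project. I'm assuming the SCnSCB->ACTLR definition from core_cm7.h and the CMSIS name for the mask; check both against your own headers and your part's TRM before relying on it:

```c
#include "device.h"   /* placeholder: your vendor's CMSIS device header for a Cortex-M7 part,
                         e.g. stm32f7xx.h, which pulls in core_cm7.h */

/* Sketch: set DISCRITAXIRUR in the Auxiliary Control Register (SCnSCB->ACTLR).
 * The mask name below follows the CMSIS-Core convention; verify it exists in your
 * core_cm7.h and matches the bit described in table 3-3 of the TRM. */
static void disable_critical_axi_read_under_read(void)
{
    SCnSCB->ACTLR |= SCnSCB_ACTLR_DISCRITAXIRUR_Msk;
    __DSB();   /* ensure the register write has completed */
    __ISB();   /* flush the pipeline so the new setting takes effect */
}
```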
What I'm basically wondering is: if I set up a Cortex-M7 without changing any of these settings, what is the worst-case interrupt latency? And is it possible to get zero-jitter interrupt latency on a Cortex-M7? I've seen references to that functionality for Cortex-M0/M0+ (where, from what I gather, it depends on whether the chip manufacturer chose to include that support in their design), but I can't find any such references for Cortex-M7. My understanding is that a good implementation with TCM will achieve the same thing - is that correct? Anyway, I'm not sure if anyone can help me answer this, but any pointers to documentation that might contain more information would be greatly appreciated. Cheers
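PS: for concreteness, this is roughly the kind of setup I mean by "a good implementation with TCM". It's only a sketch; the ITCM base address, the section names, the vector-table symbol and the handler name all depend on the specific part, startup file and linker script, none of which Arm specifies:

```c
#include "device.h"   /* placeholder: your vendor's CMSIS device header for a Cortex-M7 part */
#include <string.h>

/* Assumptions (all implementation/toolchain specific):
 *  - ITCM is enabled and large enough for the table plus the handler code
 *  - the startup code exposes the flash vector table as __Vectors (some GCC
 *    startups call it g_pfnVectors instead)
 *  - the linker script places and loads the .itcm_text / .itcm_data sections in ITCM */
#define VECTOR_COUNT   (16U + 128U)          /* core exceptions + device IRQs; adjust for your part */
extern const uint32_t __Vectors[];

/* VTOR requires the table to be aligned to its size rounded up to a power of two */
static uint32_t itcm_vectors[VECTOR_COUNT]
    __attribute__((aligned(1024), section(".itcm_data")));

__attribute__((section(".itcm_text")))
void TIM7_IRQHandler(void)                   /* example handler name, device specific */
{
    /* handler runs from ITCM: no flash wait states, no cache misses */
}

void relocate_vectors_to_itcm(void)
{
    memcpy(itcm_vectors, __Vectors, sizeof(itcm_vectors));
    SCB->VTOR = (uint32_t)itcm_vectors;      /* point exception entry at the ITCM copy */
    __DSB();
    __ISB();
}
```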
A real-time analysis based on cycle counts is IMHO error-prone unless you write your application in 100% assembly and have control over every instruction.
Otherwise, the analysis may become wrong simply because you changed the compiler.
So, again IMHO, it's the WCET that matters, not nanosecond-level jitter (for example, on an i.MX RT1064 at 600 MHz, 2 cycles ≈ 3.3 ns).
I agree; I'm not trying to base a full analysis on the specified hardware interrupt latency. I just want to separate the hardware latency from the software overhead.
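For reference, one way to approximate that split on the target is to time a software-pended interrupt with the DWT cycle counter. A sketch, assuming a CMSIS device header; the header name, IRQ number and handler name are placeholders, and some parts may additionally need the DWT unlocked via its lock access register first:

```c
#include "device.h"   /* placeholder: your vendor's CMSIS device header for a Cortex-M7 part */

#define TEST_IRQn  TIM7_IRQn                 /* placeholder: any interrupt you don't otherwise use */
volatile uint32_t t_pend, t_entry;

void TIM7_IRQHandler(void)                   /* must match the vector table entry for TEST_IRQn */
{
    t_entry = DWT->CYCCNT;                   /* first thing in the handler */
}

void measure_latency(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the DWT unit */
    DWT->CYCCNT = 0U;
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;             /* start the cycle counter */

    NVIC_EnableIRQ(TEST_IRQn);
    __DSB();
    t_pend = DWT->CYCCNT;
    NVIC_SetPendingIRQ(TEST_IRQn);           /* interrupt is taken here */
    __DSB();
    __ISB();

    /* t_entry - t_pend covers the hardware entry latency plus the handler's own
     * prologue (the CYCCNT read, any compiler-generated stacking), so treat it
     * as an upper bound rather than the pure hardware figure. */
}
```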
Sure, but again, what does it matter if you assume a worst-case latency of 14 cycles? The interrupt prologue is likely to take more time unless you can work with just the HW-saved registers.

BTW: one reason for the 12/14 difference (unchecked!) could be stack alignment.

Also, I found this in the CM7 TRM: "To minimize interrupt latency, the processor can abandon the majority of multicycle instructions that are executing when an interrupt is recognized. The only exception is a load from Device or Strongly-ordered memory, or a shared store exclusive operation that starts on the AXI interface."
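To make the prologue point concrete: if the handler body fits in the registers the hardware already stacks (R0-R3, R12, LR, the return address and xPSR), a typical compiler does not need to push anything extra, whereas keeping values live across calls forces additional stacking on top of the hardware-saved frame. A rough sketch (generic C, handler names are just examples):

```c
#include <stdint.h>

volatile uint32_t flag;

/* Fits in the caller-saved registers the hardware already stacked (R0-R3, R12),
 * so a typical compiler emits no extra push/pop in the prologue/epilogue. */
void Cheap_IRQHandler(void)
{
    flag = 1U;
}

/* Calls out to other code and keeps a value live across the calls, so the
 * compiler has to stack LR and a callee-saved register as well. */
extern void process(uint32_t a, uint32_t b);
void Expensive_IRQHandler(void)
{
    uint32_t a = flag;
    process(a, a + 1U);
    process(a + 2U, a);   /* 'a' survives the first call -> needs a callee-saved register */
    flag = a;
}
```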
"the only exception" and then they list three :-)
Thanks, those three reasons are very helpful!
Assuming 14 as the worst case isn't bad; I just wanted to make sure that 14 can actually happen on the specific chip I'm using. With the new info you found in the TRM, I think it's safe to say this covers any CM7 implementation (as far as I can tell, Device memory must exist in any implementation, don't you think? I read about it in section A3.5.5 of the ARMv7-M Architecture Reference Manual). I just want to avoid being technically incorrect when writing about this topic.
Simen Sørensen said: "Device memory must exist in any implementation"
Yes. Plus you can easily "emulate" it with the MPU.
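Something along these lines, using the CMSIS-Core ARMv7-M MPU helpers (mpu_armv7.h); the region number, base address and size are arbitrary example values:

```c
#include "device.h"   /* placeholder device header; pulls in core_cm7.h and mpu_armv7.h */

/* Sketch: mark a region as shareable Device memory (TEX=0, C=0, B=1) using the
 * CMSIS-Core MPU helpers. Region number, base address and size are examples only. */
void map_region_as_device(void)
{
    ARM_MPU_Disable();

    ARM_MPU_SetRegion(
        ARM_MPU_RBAR(0U, 0x60000000U),           /* region 0 at an example base address */
        ARM_MPU_RASR(1U,                          /* XN: no instruction fetches          */
                     ARM_MPU_AP_FULL,             /* full read/write access              */
                     0U,                          /* TEX = 0                             */
                     1U,                          /* shareable                           */
                     0U,                          /* not cacheable                       */
                     1U,                          /* bufferable -> Device memory         */
                     0U,                          /* no sub-regions disabled             */
                     ARM_MPU_REGION_SIZE_1MB));   /* example size                        */

    ARM_MPU_Enable(MPU_CTRL_PRIVDEFENA_Msk);      /* keep the default map elsewhere      */
}
```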