Hi all,
I did some investigation comparing FPU-based algorithms on the CM4 and CM7 cores. All code and data were placed in single-cycle memory with full utilization of the modified / true Harvard architecture, meaning:
- on CM4 - code in SRAM accessible via the CODE bus, data in SRAM accessible via the SYSTEM bus (modified Harvard architecture fully utilized)
- on CM7 - code in ITCM, data in DTCM
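For illustration, the placement can be expressed roughly like this (GNU toolchain; the section names here are hypothetical and must match whatever the device's linker script defines):

    .syntax unified
    .thumb
// Hypothetical section names -- the real ones come from the linker script.
    .section .dtcm_data, "aw"   // CM7: DTCM (on CM4: SRAM reached via the SYSTEM bus)
coeffs:
    .space  4096                // data table in single-cycle data memory
    .section .itcm_text, "ax"   // CM7: ITCM (on CM4: SRAM reached via the CODE bus)
    .thumb_func
fp_kernel:
    BX      LR                  // placeholder for the floating-point routine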
Most of the instructions are floating point (99%), i.e. they are not interleaved with integer instructions (this is most probably down to the compiler - to be honest, I checked the assembly for both the CM4 and CM7 builds and they looked the same). The code mostly contains general math - mul, MAC, sqrt, div - plus loads/stores, all in floating point. The results I am getting confuse me: the Cortex-M4 shows even better results than the Cortex-M7.
Questions:
- Are the differences caused by the cores' pipelines? I am not sure how the CM7's "dynamic branch prediction" works: is it really possible to execute a branch in a single cycle, or does a branch require flushing the whole pipeline (6 cycles) when the floating-point pipeline is involved?
- What are the coding best practices for getting the most out of the CM7 over the CM4 in floating point? (I am not sure the compilers are in the best shape yet with regard to the CM7.)
Thanks in advance.
Regards,
Rastislav
As a follow-up: if FP parallel load/MACs are not possible, then what is the next best plan?
From my fumbling around, it appears that doing a burst load of some size (<1 cycle/load) followed by a string of single-cycle VFMAs gives the smallest cycle count I've seen.
If the above burst method is the best, given the N-port FPU load structure, what is the optimal way to load the FP regs (which instruction to use, number of regs per burst, alignment effects, etc.)? A concrete asm code example of the optimal load/MAC method would be much appreciated.
Thanks, Chris
I can't spend any more time on this, and I haven't received any feedback. I always like to help the next person, so here's my full report of discoveries and conclusions. I would still very much like an official ARM confirmation of my findings. You have great CPU designs; please up your game in terms of detailed optimization information -- let's all get better together.
- The only FPU advantage of the M7 over the M4 is the single-cycle MAC.
- Various web sources (ARM, others) talk of "2 FPU ALU pipes". I've seen no evidence from my experiments that this is true. My guess is that it is communication confusion -- my swag: there ARE 2 paths, creatively designed to enable the 1-cycle MAC, but useless in all other regards. I don't like this conclusion and would LOVE to hear of tricks otherwise -- any way to parallelize anything.
- KEY learning: due to pipelining you CAN'T use just 2 intermediate calculation regs for some algorithms; you need 4. The MAC result apparently isn't ready to be accumulated into again just 2 instructions later, so alternating between 2 accumulators stalls, while rotating through 4 (and summing them afterwards) hides the latency:
// 2 accumulators (S0=Real, S1=Imag) -- 195 MHz:
// VFMA.F32 S0,S31,S8
// VFMA.F32 S1,S30,S9
// VFMA.F32 S0,S29,S10    // S0 reused just 2 instructions later -> stall
// VFMA.F32 S1,S28,S11
//
// 4 accumulators (S0,S1=Real [sum later], S2,S3=Imag [sum later]) -- 317 MHz:
// VFMA.F32 S0,S31,S8
// VFMA.F32 S1,S30,S9     // inferred -- the 4-accumulator pattern needs the S1 term
// VFMA.F32 S2,S29,S10
// VFMA.F32 S3,S28,S11
- Simple alternating load/MAC, as suggested by the literature, does quite poorly. Bursts of loads do much better -- do that. My hunch: I assumed (my bad -- read the details) that the marketing of parallel load/MACs applied equally to FPU and integer math. From my testing, it appears to be true only for integer. What a shame. Again, I'd love to be wrong. Please show me the trick.
- There is an art to reg bursting. Study your algorithm; really break it down to its essence. The FPU is in general MUCH preferred to the integer ALU (only 13-14 usable regs) because we have 32 SP regs -- 32! Huge resource. Loads/stores are trouble. Move what you can into regs, keep it there, and run lots of cycles on it. For the remaining data that must be loaded on the fly, do this: identify the bare minimum number of regs needed for intermediate calculations, then allocate the rest as a "load buffer" that is filled with a single VLDM (Ex: VLDM R1!, {S4-S15}). The code should have a pattern -- one VLDM, lots of VFMAs, repeat (see the sketch below). That's it. That's the only trick I was able to find. This trick was also my only one in M4 land.
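A minimal sketch of that pattern, in GNU assembler syntax. The register roles (S0-S3 rotating accumulators, S16-S27 resident coefficients, S4-S15 load buffer) and the R0/R1 arguments are illustrative, not lifted from my real kernel:

    .syntax unified
    .thumb
// Hypothetical inner loop: R0 = streaming data pointer, R1 = iteration count.
// S0-S3  : four rotating accumulators (summed pairwise after the loop)
// S16-S27: coefficients loaded once up front and kept resident
// S4-S15 : 12-register "load buffer" refilled by one VLDM per pass
inner_loop:
    VLDM     R0!, {S4-S15}      // one burst load...
    VFMA.F32 S0, S4,  S16       // ...then a string of back-to-back MACs
    VFMA.F32 S1, S5,  S17
    VFMA.F32 S2, S6,  S18
    VFMA.F32 S3, S7,  S19
    VFMA.F32 S0, S8,  S20       // each accumulator is touched again only
    VFMA.F32 S1, S9,  S21       // every 4th instruction, so no stall
    VFMA.F32 S2, S10, S22
    VFMA.F32 S3, S11, S23
    VFMA.F32 S0, S12, S24
    VFMA.F32 S1, S13, S25
    VFMA.F32 S2, S14, S26
    VFMA.F32 S3, S15, S27
    SUBS     R1, R1, #1
    BNE      inner_loop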
- Cache. TCM is MUCH preferred but, for various reasons, it sometimes can't be used. ARM: push your licensees to provide a better selection of TCM configurations; don't force me to waste huge amounts, let me have a highly granular config -- I needed just a bit of DTCM (little benefit from ITCM) and can't select that on my chip. OK, back to cache. You're fighting 16 KB of D-cache; understand that and embrace it. If your twiddle/etc. tables are bigger than that, your code will be horribly slow. The trick is simple: find a way to break your algorithm into bursts so that a portion of the table WILL fit in cache; the first set of math is slow (it loads the cache), then the rest is fast. For my algorithm, I used bursts of 16 (1 slow, 15 fast) and had to break my processing into 3 sections so that a part of the table would fully fit in cache most of the time (break it into more sections if it misses fitting too often [interrupts, task switching, etc.]). A sketch of the loop shape follows below.
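To make the shape concrete, here is a hypothetical skeleton of the blocked loop (SECTIONS, PASSES, SECTION_BYTES, and the process_section routine are placeholders, not my actual code):

    .syntax unified
    .thumb
    .equ SECTIONS,      3       // table split into 3 pieces, as above
    .equ PASSES,        16      // 1 slow pass (cache fill) + 15 fast passes
    .equ SECTION_BYTES, 4096    // hypothetical: one piece, sized to fit in the 16 KB D-cache
// R0 = base of the full table on entry
cache_blocked:
    PUSH    {R4-R6, LR}
    MOV     R6, R0              // R6 = base of the current table section
    MOV     R4, #SECTIONS
section_loop:
    MOV     R5, #PASSES
pass_loop:
    MOV     R0, R6              // the kernel only ever touches one section...
    BL      process_section     // ...so repeated passes hit in the D-cache
    SUBS    R5, R5, #1
    BNE     pass_loop
    ADD     R6, R6, #SECTION_BYTES
    SUBS    R4, R4, #1
    BNE     section_loop
    POP     {R4-R6, PC}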
- Load/store density reduction. This is a small fine-tuning item, but I was able to save a few cycles and it only takes a few minutes of fiddling. After all the heavy-duty MAC math in the main inner loop, there's a bottom section of the obligatory load/store/update type. I was able to rearrange items (where the order didn't matter) so that no load/store was next to another one -- it's faster, but the code is less logical; no big deal, put in a long comment, disks are free (see the before/after below). ARM: again, we need data. What's going on here? Just share the inner workings so we know the silicon with which we dance. It's not super-secret IP; almost all CPU IP shops must be dealing with this, and I don't see any competitive loss. It could even become an advantage, by showing that you are willing to help your customers wring every last cycle out of your parts. A less preferred path, but I'll take it: do an NDA, then share the tricks. But this is much less preferred, because all the really innovative small shops (the ones that turn into big ones) will never do it -- lawyers are expensive, and they don't have time to waste on NDAs that can take months to execute.
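A hypothetical before/after of what I mean -- the registers and offsets are made up, only the ordering matters:

// Before: memory ops back to back
    VSTR    S0, [R2, #0]
    VSTR    S1, [R2, #4]
    ADD     R2, R2, #8
    SUBS    R3, R3, #1
// After: a non-memory op slotted between the two stores -- same result, fewer cycles in my tests
    VSTR    S0, [R2, #0]
    SUBS    R3, R3, #1          // count update moved between the stores
    VSTR    S1, [R2, #4]
    ADD     R2, R2, #8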
Happy optimizing, Chris