Hi,
I am using the i.MX 8X, which has one cluster of 4 Cortex-A35 cores, with DDR3L (DDR3-1866) and ECC enabled.
I performed some measurements with MEMCPY and MEMSET functions to estimate the DDR bandwidth, with one Cortex-A35 core running. Here are the best results I have:
- MEMSET: 6079 MB/s
- MEMCPY: 2081 MB/s
- MEMREAD: 2880 MB/s
The functions are based on NEON instructions with memory prefetch instructions (except MEMSET, which has no prefetch instructions), and the caches and MMU are active.
The idea here is to configure the core or the cluster components to get as close as possible to the theoretical bandwidth, which is 7464 MB/s (DDR-1866, 32 bits), in order to speed up code execution from DDR for a normal application running on one Cortex-A35 core.
As the MEMSET measured bandwidth seems acceptable (81% of theoretical bandwidth), it would be surprising if read accesses were not optimizable.
Given the read access latency of the DDR chip used (13 cycles) and its write access latency (9 cycles), I would have expected a difference between the MEMSET and MEMREAD results, but not one this large, especially because, with the MMU and caches enabled, I would expect the controller to perform continuous accesses to the DDR, where the impact of the read and write latencies is minimized.
I have already posted some questions about the DDR controller of the i.MX 8X on the NXP forum, and I have also tried different settings in the Cortex-A35 to optimize the read accesses, but I can't get significant improvements.
In some discussions on the i.MX forums, I also found that using 4 cores instead of 1 enhances the available bandwidth by 10%-20%.
Whether the caches and MMU are enabled directly impacts the memory test results (because cache lines are filled in the background), and I am convinced there are still things I need to understand to configure the core correctly, but I can't find what.
Does anyone have more information on this?
Thanks,
Gael
Hello Gael,
1. Cacheability considerations for memory system throughput:
For non-cacheable memory regions, the merge write buffer on the data path to memory will indeed combine requests to the same cache line. For cacheable memory regions, it may instead issue individual write transactions to the cache system. Write merging works well for sequential accesses such as those of memset.
The Cortex-A35 is based on the Armv8-A architecture and supports both the AArch64 and AArch32 instruction sets. For optimal performance I guess you use AArch64 instructions, and the cache line size is 64 bytes.
As Bastian said, the maximum merged write burst is therefore 64 bytes, with 64-byte alignment.
The overall performance for cacheable accesses depends not only on the data size but also on the cache hit rate. If the accesses always hit in the cache, the latency is that of the cache; if they miss, there is additional latency from the accesses to the backing memory.
2. Effect of disabling MMU:
Disabling the MMU actually makes all your memory accesses strongly ordered, without any pipelining, as only one access is issued at a time, which explains the minimal performance. This is not the same as having the MMU enabled with the memory attributes set to Normal but the caches disabled.
3. L2 cache system
The "early write response" and "full line of zeros write" optimizations are performed by the L2 cache controller and/or the SCU, and operate on both the L2 cache and the backing memory. I have never worked with the Cortex-A35, but as you mentioned, the Data Cache Zero by VA instruction (DC ZVA) can clear one cache line in the L1 cache, and the rest of the cache system down to the backing memory for the same cache line if set up properly (the Point of Coherence would have to be set to L3).
4. Number of allowed pending loads
I am not familiar with coding with NEON instructions, but if you issue a VLDM, I guess it is passed to the load/store unit, which is limited by the number of pending load and store requests that can be issued by the Load Store Unit (LSU), or what Arm seems to call the Data Cache Unit (DCU) of the L1 cache system. Maybe someone else can elaborate on that?
Florian