Hello Forum,
I am using a cluster of 4 x A53 cores: two of the cores are assigned to run Linux and the other two run a custom application. I have reserved a memory region that is not mapped by Linux. Part of it is used to run the custom OS and the rest is a shared buffer for passing data to and from Linux. I have mapped the shared memory into Linux and I am trying to achieve cache coherency between the Linux and custom OS cores, but I don't see it happening. Linux runs at EL1 and my custom OS at EL3.
The custom OS sets up its EL3 MMU and all the necessary attributes in its page tables, including inner and outer shareable. However, when caching is enabled for the shared memory region, I see that the memory is not coherent between the cores. If I disable caching, then I can see valid data from the other end.
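For reference, the attributes I mean are the AttrIndx and SH fields of the stage 1 block descriptors at EL3. A rough sketch of the encoding (the symbolic names and index numbers are placeholders for illustration, not my actual code):

// Stage 1 block descriptor fields relevant here (VMSAv8-64, lower attributes).
.equ DESC_BLOCK,      (0x1 << 0)    // bits[1:0] = 01: block entry
.equ DESC_ATTRIDX_WB, (0x4 << 2)    // bits[4:2]: MAIR index for Normal write-back (cached)
.equ DESC_ATTRIDX_NC, (0x3 << 2)    // bits[4:2]: MAIR index for Normal non-cacheable
.equ DESC_AP_RW,      (0x1 << 6)    // bits[7:6] = 01: read/write
.equ DESC_SH_OUTER,   (0x2 << 8)    // bits[9:8] = 10: Outer Shareable
.equ DESC_SH_INNER,   (0x3 << 8)    // bits[9:8] = 11: Inner Shareable
.equ DESC_AF,         (0x1 << 10)   // Access Flag, must be set to avoid a fault

// Cached mapping of the shared buffer (what I want to be coherent with Linux):
//   DESC_BLOCK | DESC_ATTRIDX_WB | DESC_SH_INNER | DESC_AP_RW | DESC_AF | <output address>
// Non-cached mapping of the same window (the case that works today):
//   DESC_BLOCK | DESC_ATTRIDX_NC | DESC_AP_RW | DESC_AF | <output address>
// (The SH field is ignored for Normal Non-cacheable memory.)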
To understand a bit better what is happening, I have split the shared memory into two regions: one with caching enabled (inner/outer shareable) and one non-cached. I have a small test app that writes some test vectors into a small area of the cached memory, and then I signal the custom OS through the non-cached memory to read the specific block of data written by the Linux test app. The RTOS reports back through the non-cached region whether the data in the cached shared memory is valid.
On the first run of my test, the RT core reports that it read the test vector correctly; if I run it again immediately, it fails; if I run it again after about 2 minutes, it passes.
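In case it helps to see the shape of the test, here is a hypothetical sketch of the RTOS-side check (the addresses, labels and PATTERN value are made-up placeholders, not my actual code):

// Hypothetical RTOS-side check. MBOX_NC stands for the non-cached mailbox,
// BUF_WB for the cached shared buffer, PATTERN for the value the Linux test
// app is expected to have written. All three are placeholders.
.equ MBOX_NC, 0x60000000
.equ BUF_WB,  0x60100000
.equ PATTERN, 0xA5A5A5A5

check_shared_buffer:
    ldr  x1, =MBOX_NC
    ldr  x2, =BUF_WB
1:  ldr  w0, [x1]             // poll the doorbell word written by Linux
    cbz  w0, 1b
    dmb  sy                   // order the doorbell read before the data read
    ldr  w0, [x2]             // read the first test word from the cached window
    ldr  w3, =PATTERN
    cmp  w0, w3
    cset w0, eq               // w0 = 1 if the data matched, 0 if it was stale
    str  w0, [x1, #4]         // report the result back through the mailbox
    ret

Note that the dmb only orders the two reads; it does not make the cached data coherent by itself.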
I am not sure what may be wrong here. Is it possible for EL1 and EL3 to share a memory region that is cacheable?
Any advice would be very helpful.
Thank you
Andreas
I have tried changing the NSTable bit, but it does not help.
The problem comes up when I set the translation table block entry to point to a MAIR index that uses Inner/Outer Write-Back Write-Allocate caching. When I use an index where caching is disabled, I can access the memory from Linux. It is purely a cache coherency problem.
Thanks, I managed to get it going. Linux runs at EL1 and accesses the shared memory region as Non-secure, so EL3 has to be configured to access it as Non-secure as well. I didn't have to use NSTable; it was always 0.
The descriptor I used at EL3 was 0x0060000000000E71, with:
NS bit = 1
AP = 01 (read/write)
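For anyone who finds this later, that value decodes as follows against the stage 1 block descriptor format (my own annotation; the output address bits are zero in the value as quoted):

// 0x0060000000000E71 field by field (stage 1 block descriptor):
//   bits[54:53] = 0b11   -> UXN=1, PXN=1 (execute-never)
//   bit[11]     = 1      -> nG (ignored in the EL3 translation regime)
//   bit[10]     = 1      -> AF, access flag
//   bits[9:8]   = 0b10   -> SH, Outer Shareable
//   bits[7:6]   = 0b01   -> AP, read/write
//   bit[5]      = 1      -> NS, output address is in the Non-secure PA space
//   bits[4:2]   = 0b100  -> AttrIndx = 4, i.e. MAIR_EL3 attribute 4 (Normal write-back, below)
//   bits[1:0]   = 0b01   -> block entry
ldr x0, =0x0060000000000E71   // output (physical) address bits are zero in this quoted value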
MAIR Register:
/* Set up memory attributes. This equates to:
 *   0 = 0000 0000 = Device, nGnRnE
 *   1 = 0000 0100 = Device, nGnRE
 *   2 = 0000 1100 = Device, GRE
 *   3 = 0100 0100 = Normal, outer non-cacheable, inner non-cacheable
 *   4 = 1111 1111 = Normal, outer write-back non-transient, inner write-back non-transient   <<<<<<<-
 *   5 = 1011 1011 = Normal, outer write-through non-transient, inner write-through non-transient
 */
ldr x1, =0x0000BBFF440C0400
msr MAIR_EL3, x1
The NS bit in the final page table entry controls accesses to the memory that the entry maps. So your final setup looks correct to me.
The NSTable bit controls whether the table walk itself switches to Non-secure (Normal World). That allows you to graft a portion of a page tree supplied by the Normal World onto an otherwise Secure page tree in the Secure World. The subsequent table walks then occur in the Normal World and, depending on how your environment is set up, that may cause the walk to terminate or produce an errant mapping.
That likely explains why NSTable did not do what you expected.
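To make the distinction concrete, the two bits live in different kinds of entries (a rough sketch; the names are placeholders):

// Leaf (block or page) entry: NS is bit 5 and selects the Non-secure PA space
// for the memory that this entry maps.
.equ DESC_NS,       (0x1 << 5)

// Table entry: NSTable is bit 63. It only has an effect for walks performed in
// the Secure state (e.g. via TTBR0_EL3) and it applies to all subsequent lookup
// levels: the later descriptors are fetched from Non-secure memory and treated
// as Non-secure.
.equ TABLE_NSTABLE, 0x8000000000000000

Setting NS in the leaf entries, as you did, is the usual way for EL3 code to map a Non-secure buffer while keeping its own translation tables Secure.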