I am working on an Arria 10 SoC with a dual-core ARM Cortex-A9 MPCore, in a bare-metal environment using Intel's SoCFPGA hardware library (HwLib).
On the shared SDRAM, I plan to have a dedicated memory region for each core plus a shared non-cacheable area; the FPGA does not access the SDRAM. But my attempts to initialize the MMU fail.
Here is what I have done:
For CPU0:
static void mmu_init(void)
{
    uint32_t *ttb1 = NULL;

    /* Populate the page table with sections (1 MiB regions). */
    ALT_MMU_MEM_REGION_t regions[] = {
        /* CPU0 memory area: 256 MB */
        {
            .va         = (void *)0x00000000,
            .pa         = (void *)0x00000000,
            .size       = 0x10000000,
            .access     = ALT_MMU_AP_FULL_ACCESS,
            .attributes = ALT_MMU_ATTR_WBA,
            .shareable  = ALT_MMU_TTB_S_NON_SHAREABLE,
            .execute    = ALT_MMU_TTB_XN_DISABLE,
            .security   = ALT_MMU_TTB_NS_SECURE
        },
        /* CPU1 memory area: 512 MB. CPU0 MUST NOT ACCESS THIS AREA. */
        {
            .va         = (void *)0x10000000,
            .pa         = (void *)0x10000000,
            .size       = 0x20000000,
            .access     = ALT_MMU_AP_NO_ACCESS,
            .attributes = ALT_MMU_ATTR_FAULT,
            .shareable  = ALT_MMU_TTB_S_NON_SHAREABLE,
            .execute    = ALT_MMU_TTB_XN_DISABLE,
            .security   = ALT_MMU_TTB_NS_SECURE
        },
        /* Shared memory area: 256 MB */
        {
            .va         = (void *)0x30000000,
            .pa         = (void *)0x30000000,
            .size       = 0x10000000,
            .access     = ALT_MMU_AP_FULL_ACCESS,
            .attributes = ALT_MMU_ATTR_DEVICE_NS,
            .shareable  = ALT_MMU_TTB_S_NON_SHAREABLE,
            .execute    = ALT_MMU_TTB_XN_ENABLE,
            .security   = ALT_MMU_TTB_NS_SECURE
        },
        /* Device area: everything else */
        {
            .va         = (void *)0x40000000,
            .pa         = (void *)0x40000000,
            .size       = 0xc0000000,
            .access     = ALT_MMU_AP_FULL_ACCESS,
            .attributes = ALT_MMU_ATTR_DEVICE_NS,
            .shareable  = ALT_MMU_TTB_S_NON_SHAREABLE,
            .execute    = ALT_MMU_TTB_XN_ENABLE,
            .security   = ALT_MMU_TTB_NS_SECURE
        }
    };

    assert(ALT_E_SUCCESS == alt_mmu_init());
    assert(alt_mmu_va_space_storage_required(regions, ARRAY_SIZE(regions))
           <= sizeof(alt_pt_storage));
    assert(ALT_E_SUCCESS == alt_mmu_va_space_create(&ttb1, regions, ARRAY_SIZE(regions),
                                                    alt_pt_alloc, alt_pt_storage));
    assert(ALT_E_SUCCESS == alt_mmu_va_space_enable(ttb1));
}
For CPU1: the same initialization, with the CPU0 and CPU1 regions' .access and .attributes values swapped.
This configuration results in a system hang inside the alt_mmu_va_space_enable() function.
Since each CPU has its own MMU, I believe this configuration is necessary for memory safety. I don't see any mechanism that coordinates the MMUs between cores, so I think per-core translation tables like these are the only way to implement safe shared memory. Are my method and inference right? If so, what am I doing wrong? Thanks in advance.
Do you also set the DACR? I see no "domain" in the initialization. As a first step, try setting the DACR to all 1s (which disables the AP checks).
I think the alt_mmu_va_space_enable() function does that: it sets all 16 domains to 1 (ALT_MMU_DAP_CLIENT) by default. Should I change it?
42Bastian Schick Hello again. I have been working on this for weeks now without success, and I couldn't find a single example or tutorial on bare-metal MMU setup for MPCore systems that answers my questions. Sadly, I don't know anybody experienced in this subject, so I cannot even get a proper roadmap. I have some (probably very basic) questions on the subject. Can you help me?
I hope you understand that I cannot really help you beyond the point I already did. I do not know anything about the Intel software you are using, and I'd expect an FAE from Intel to be able to help you with this.
From my perspective the setup values look correct. Did you check with a debugger whether the registers are actually set as you expect?
I was not expecting help on the software implementation side; I guess the HwLib code in the question made it look that way. Sorry if I couldn't express myself properly, and thank you for your kind answer.
First of all, I can initialize the MMU now. In my CPU1 scatter file I had placed the init space at 0x01000000, but the MMU configuration marked that region as NO ACCESS. Giving CPU1 an accessible region at 0x01000000 solved the hang. But I think my problem is more on the conceptual side, because now I get runtime errors both in normal memory and in the device area (my serial terminal goes nuts sometimes, so I guess I am breaking something in the device region). The questions I mentioned in the previous post were: the scatter file's relationship with the MMU configuration, using malloc() in a system with the MMU enabled, fragmented identity mapping, and so on. Those are the problems I hit while testing the MMU configuration, and I don't even know if I'm testing it the right way.
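On the scatter-file question: the rule that bit here is that every load/execution region of an image must lie inside memory that the same core's translation tables map as accessible (and, for code, not XN). A hypothetical armlink-style scatter fragment for the CPU1 image, assuming the 512 MB CPU1 area at 0x10000000 from the original question (region and section names are made up for illustration):

```text
; CPU1 image must live inside the area CPU1's own MMU tables
; map as accessible -- here the 512 MB region at 0x10000000.
LOAD_CPU1 0x10000000 0x20000000
{
    CODE_CPU1 0x10000000
    {
        *(+RO)                ; code and read-only data
    }
    DATA_CPU1 +0
    {
        *(+RW, +ZI)           ; initialised data and BSS
    }
}
```

If any region of the image (entry code, vector table, heap, stack) falls into an area the MMU maps as NO ACCESS or marks XN, the core faults as soon as it touches it, which is exactly the 0x01000000 hang described above.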
For example, to test a fragmented MMU configuration (regions configured FULL ACCESS - NO ACCESS - FULL ACCESS, in that order), I call malloc() with 1 MB in a while loop to see which regions I can allocate. But after the first accessible area the code aborts. I would expect it to see the next accessible memory block and continue allocating from there...
I know it's hard to answer all of this in this medium, and it is clearly asking too much, to say the least. So even a source suggestion or a little clue is really appreciated. Once again, thank you, 42Bastian Schick, for your interest.
Are you sure the heap is set up correctly? I think it is easier to test the MMU configuration with a plain pointer instead of malloc(); a failing malloc() is no indication of a wrong MMU setup (the C heap is one contiguous arena and will not skip over an unmapped hole in the middle of it). Is the code of CPU1 in the correct area? A common fault is to "cut the branch you are sitting on", i.e. to mark the memory your own code executes from as non-executable (XN).