Hi Experts,
I'm reading the white papers for ARMv7 and ARMv8,
but while reading the parts on caches and memory re-ordering, I have some basic questions.
Suppose we have the instructions below:
Core A:
STR R0, [Msg]
STR R1, [Something]
Core B:
DSB
LDR R1, [Something]
......
My questions are:
Since I'm a complete beginner, maybe the situation itself is wrong...
But I hope for a merciful answer.
Thanks.
Thanks, Peter.
But please confirm whether my thinking is valid or not:
Suppose Core A tries to store data to memory, but the core may actually write the data into its cache (assume caching is enabled).
Next, some operation must be processed using that stored data, but execution moves to another core (Core B).
However, Core B doesn't hold the same cache line as Core A (suppose the SCU has no effect).
In this situation, Core B might read stale data and the operation would be invalid.
So, to prevent this situation, if I use instructions like DCCIMVAC or DCCMVAC, does that guarantee that Core B can read the data updated by Core A?
Maybe this is a silly question, but I hope for a merciful answer. Thanks.
Hello,
As an aside / in addition to what Pete said, I recommend you take a look at the barrier examples provided in the ARMv8-A Architecture Reference Manual, such as Section J7.6.1 (ARM DDI 0487A.f), which shows that you should have a DMB barrier on Core A between the two stores, and a DMB barrier on Core B between the flag-checking loop and the load from the mailbox.
Hi levi,
In a coherent multi-core system, data is always kept in sync between the caches, provided the pages are marked as shareable in the page tables, without any need for explicit cache maintenance operations in the code. If explicit cache maintenance were required, SMP operating systems just wouldn't work; it has to be transparent and automatic.
Barriers are a totally different aspect, not really related to cache coherency at all. ARM has a weakly ordered memory model: instructions in one thread are allowed to complete out of order with respect to each other in many cases, unless there is an address dependency. For example:
STR r0, [msg]
STR r0, [msg_written]
If msg and msg_written are different addresses, then without a barrier we could write the msg_written flag into memory before writing the message itself. If another core polls the msg_written address and then tries to read msg, it may get the wrong data. Barriers are effectively restrictions on instruction completion to ensure that things happen in the right order.
STR r0, [msg]
DMB
STR r0, [msg_written]
The DMB guarantees that the msg write is committed before the msg_written write is committed. Note that this has nothing to do with cache visibility to other processors; that is all magically handled by the SMP hardware. We just need to guarantee that the two writes into the cache are ordered correctly to get the behaviour the programmer intended.
HTH,
Pete