
Memory barriers (DSB, DMB): do they guarantee that data in the cache is written to memory?

Hi Experts,

I'm reading the white papers for ARMv7 and ARMv8.

But while reading the cache and memory re-ordering parts, I have some silly questions...

Suppose there are below instructions..

 

Core A:

     STR R0, [Msg]

     STR R1, [Something]

Core B:

     DSB

     LDR R1, [Something]

     ......

my questions are :

  • If Core A stores R1's data in its cache, does the DSB guarantee that the data is also written out to memory?
  • If the answer to the first question is no, should I flush the cache (e.g. with DCCSW) to get the data written to memory?

Since I'm a real beginner, maybe the scenario itself is wrong...

But I hope for your merciful answer.

Thx.

  • Hi levi,

    In a coherent multi-core system, data is always kept in sync between the caches, provided the pages are marked as shareable in the page tables, without any need for explicit cache maintenance operations in the code. If explicit cache maintenance were required, SMP operating systems just wouldn't work - it has to be transparent and automatic.

    Barriers are a totally different aspect, not really related to cache coherency at all. ARM has a weakly ordered memory model - memory accesses from one thread are allowed to complete out of order with respect to each other in many cases, unless there is an address dependency. For example:

    STR r0, [msg]
    STR r0, [msg_written]

    If msg and msg_written are different addresses, then without a barrier we could write the msg_written flag into memory before writing the message itself. If another core polls that msg_written address and then tries to read msg, it will get the wrong data. Barriers are effectively restrictions on instruction completion that ensure things happen in the right order.

    STR r0, [msg]
    DMB
    STR r0, [msg_written]

    The DMB guarantees that the msg write is committed before the msg_written write is committed. Note that this has nothing to do with cache visibility to other processors - that's all magically handled by the SMP hardware - we just need to guarantee that the two writes into the cache are ordered correctly to get the behaviour the programmer intended. (The reading core similarly needs a barrier - a DMB between loading msg_written and loading msg - to stop its loads being reordered the other way.)

    HTH,

    Pete
