I am developing a driver for a DMA bus-master device, part of an SoC powered by a Cortex-M7 CPU. Suppose I have two memory locations, x and y, which map to the same cache line in normal, write-back cacheable memory, and suppose the following sequence of events:

1. Start with x = x1, y = y1, cache line invalid.
2. CPU reads y.
3. DMA device sets x = x2 in memory.
4. CPU sets y = y2.
5. CPU cleans the cache line.

After step 5 completes, from the point of view of the DMA device, x = ?

I think the DMA will see x = x1. Here is my reasoning:

- When the CPU reads y in step 2, the cache line gets pulled into the cache. It now holds x = x1, y = y1, and is marked as valid.
- The DMA then updates x in memory, but the change is not reflected in the cache line.
- When the CPU sets y = y2, the cache line is marked as dirty.
- When the CPU cleans the cache line, as it is dirty it gets written back to memory.
- When it is written back, memory ends up holding x = x1, y = y2, thus overwriting the change made by the DMA to x.

Does that reasoning sound right?
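For concreteness, here is a minimal C sketch of that sequence as I picture it (assuming the CMSIS-Core SCB_CleanDCache_by_Addr helper and GCC-style alignment attributes; shared_t, buf and demo are made-up names, and 32 bytes is the M7 D-cache line size):

```c
#include "device.h"   /* placeholder: your SoC's CMSIS device header (pulls in core_cm7.h) */

typedef struct {
    volatile uint32_t x;   /* written by the DMA device */
    volatile uint32_t y;   /* written by the CPU        */
} shared_t;

/* Both fields share one 32-byte D-cache line. */
static shared_t buf __attribute__((aligned(32)));

void demo(void)
{
    uint32_t tmp = buf.y;   /* 2. the read allocates the line: cached copy holds x1, y1 */
    (void)tmp;

    /* 3. DMA writes x = x2 straight to RAM; the cached copy still holds x1. */

    buf.y = 0xBEEFu;        /* 4. CPU write only dirties the cached line. */

    /* 5. Clean writes the whole dirty line back to RAM: x1 goes out with the
     *    new y and overwrites the DMA's x2, so the DMA now sees x = x1.     */
    SCB_CleanDCache_by_Addr((uint32_t *)&buf, sizeof(buf));
}
```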
Thank you for your reply. So, assuming I keep the descriptors in cacheable memory, if I want to invalidate/clean descriptors individually, I need to make sure each of them lies in a different cache line, and also that no other data structure written by the DMA shares a cache line with them, so that I don't overwrite it?
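Concretely, something like this sketch is what I have in mind for the cacheable case (the descriptor fields are invented for illustration; 32 bytes is the Cortex-M7 D-cache line size):

```c
#include <stdint.h>

#define DCACHE_LINE  32u

typedef struct {
    volatile uint32_t ctrl;
    volatile uint32_t buf_addr;
    volatile uint32_t len;
    volatile uint32_t status;
} dma_desc_fields_t;

typedef union {
    dma_desc_fields_t d;
    uint8_t pad[DCACHE_LINE];   /* round each descriptor up to a full cache line */
} dma_desc_t;

_Static_assert(sizeof(dma_desc_t) == DCACHE_LINE, "descriptor must fill exactly one cache line");

/* Line-aligned array: every element starts on its own cache line, so
 * per-descriptor maintenance cannot touch a neighbouring descriptor, e.g.
 *   SCB_InvalidateDCache_by_Addr((uint32_t *)&ring[i], sizeof(dma_desc_t));
 *   SCB_CleanDCache_by_Addr((uint32_t *)&ring[i], sizeof(dma_desc_t));      */
static dma_desc_t ring[8] __attribute__((aligned(DCACHE_LINE)));
```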
Descriptors are (most often) not the size of a cache line, so it is better to place them in non-cacheable memory (still "Normal" memory in ARM terms, just marked non-cacheable). The pain of invalidating/cleaning cache lines every time the DMA changes something is not worth the effort.
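In case it's useful, here is a minimal sketch of one way to set that up with the CMSIS ARMv7-M MPU helpers (mpu_armv7.h). The region number, base address and size are placeholders for your own memory map; TEX=1, C=0, B=0 marks the region Normal, non-cacheable:

```c
#include "mpu_armv7.h"   /* CMSIS ARMv7-M MPU helpers (include your device header first) */

#define DESC_POOL_BASE  0x20010000u   /* placeholder: must be aligned to the region size */

void dma_pool_mpu_init(void)
{
    ARM_MPU_Disable();

    /* Region 0: 32 KB of Normal, non-cacheable, read/write memory for DMA
     * descriptors and buffers. Adjust region number, base and size as needed. */
    ARM_MPU_SetRegion(
        ARM_MPU_RBAR(0u, DESC_POOL_BASE),
        ARM_MPU_RASR(1u,                    /* XN: no instruction fetches      */
                     ARM_MPU_AP_FULL,       /* privileged + unprivileged RW    */
                     1u, 0u, 0u, 0u,        /* TEX=1, S=0, C=0, B=0            */
                     0u,                    /* no subregions disabled          */
                     ARM_MPU_REGION_SIZE_32KB));

    ARM_MPU_Enable(MPU_CTRL_PRIVDEFENA_Msk);   /* keep the default map elsewhere */
}
```

Note that anything already cached from that address range should be cleaned/invalidated once before relying on the new attributes, and the linker script has to actually place the descriptor pool at the chosen base address.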