I am using an Atmel/Microchip ATSAME70 Cortex-M7 processor and was working on getting the Ethernet GMAC driver working. The GMAC uses an internal DMA engine to transfer packets to SRAM. I noticed that I was getting HRESP errors when the DMA transferred data, and the peripheral driver was not working. When I disabled the data cache, the driver worked.
The code polls the SRAM locations to see if a DMA transfer has completed, so I tried leaving the data cache on and invalidating the cache before reading the SRAM, but that did not fix the problem. I was wondering if anyone knew of any problems using the data cache with peripheral DMA like this?
Note that every time I contact Microchip about such problems, they inform me that they never test/run code with caching enabled, so they never see any problems.
I would not recommend polling the buffer or the ring-buffer descriptors. Rather, poll the interrupt status register of the GMAC.
Regarding your question below: Today's chips are monsters, so there are thousands of pages to describe them. I have to deal with a lot of different vendors, but I cannot say that any one of them has the _best_ manual. The same goes for examples/driver libraries from the manufacturers. None is complete, most are over-complicated and many have bugs. In short, you cannot pick a SoC because of its drivers/manuals.
Yea, I keep hoping one will take the time to do good datasheets/drivers. I spent days debugging the GMAC on the SAME70 only to find out the CMSIS pack provided by Microchip had the register addresses wrong, hence I could not get the drivers to work. To add insult to injury, they told me that the bug had already been reported, and of course they had not fixed the two-year-old bug.
The issue I am finding with SoCs is that most bugs are caused by poor datasheets and drivers; that is, the example drivers do not check and handle all error situations and are full of bugs. Then the datasheets are often wrong; for example, the Atmel datasheet for one processor indicates the internal pull-ups can be used when pins are connected to peripherals, which is incorrect. So most of my time is spent writing drivers and deciphering datasheets, and once I learn a processor I try to stick with it for a few years.
My caching issue with the GMAC also involved the write path: when sending a packet you write the data to SRAM and then kick off the DMA, so I had to clean (flush) the cache before kicking off the DMA.
I am debating whether the best way to handle DMA memory caching is to make a non-cached section of SRAM using linker sections, or whether it is faster/better to flush the cache. I assume the non-cached SRAM might be faster than constantly flushing and invalidating the cache, since you invalidate more memory than just the DMA memory space.
My experience is that it is better to clean or invalidate just the cache range covering the DMA buffer than to place the buffers in a non-cacheable area.
You only have to make sure that your buffers are aligned to the cache-line size (32 bytes on the CM7, IIRC).
But then, SCIOPTA uses message passing, thus the DMA writes directly into the TCP/IP framebuffer.
So I finally got the driver to work with the data cache enabled. I had to go through the driver, find all the places it was transferring RAM buffers, and make sure the data cache was flushed correctly before each one. I also noticed that some of the buffers were not explicitly aligned to the correct byte boundaries, but as luck had it the compiler did align them correctly. Here again the datasheet was unclear about byte boundaries and appeared to contain contradictions; to be safe, I aligned all the buffers.
Which datasheet? The core one or the SoC one? Most often the SoC manuals do not contain much info about the core.
The SoC (ATSAME70)...
Chapter 15.1 clearly tells the cache line size. ;-)
Yes... The ATSAME70 GMAC driver did not align the buffers for the GMAC DMA engine, even though the datasheet mentioned a requirement that they be aligned.