Hi, I wonder if anyone could help me. I've got an SD card related problem. My project builds under uVision V4.73.0.0, C compiler Armcc.exe V5.03.0.76, device = STM32F427II. My target platform, a custom hardware design, has a native SD card device (2 GB capacity, FAT file system) for which we're using Keil's RL-FlashFS, i.e. SDIO_STM32F4xx.c and File_Config.c (Rev V4.70).
The problem: the SD card becomes corrupt. I can still read and write files to it from my target platform perfectly well, and finit() returns 0, indicating the card is in good working order; however:
- ffree() takes over 1 second to complete, whereas it normally completes in approximately 70 ms on a card that hasn't been corrupted. (This very slow response prevents my target booting, which is how I discovered the card corruption.)
- My Windows PC reports 700+ MB of used space on the card when the files on the card add up to less than 10 MB.
- Windows CHKDSK reports that the SD card has errors and can repair it; it finds around 24,000 x 32 kB bad clusters. Once CHKDSK has repaired the card, the used space roughly equals the size of the files on it, and ffree() calls on my target platform complete in the usual 70 ms.
BTW, when I make a copy of the corrupted card using HDDGuru's HDDRawCopy 1.10, the copy shows the same 700+ MB of wasted (corrupted) space, just like the original. Yet when I insert the copy into my target platform, calls to ffree() complete in the normal 70 ms time frame. Why?
Specifically, I would like help with:
1. Detecting the SD card corruption on my target platform. Everything appears to work fine apart from the very slow ffree(); unfortunately fanalyse() and fcheck() aren't available to me because it is a FAT file system.
2. Understanding why a low-level copy of the card doesn't suffer from the very slow ffree() response.
3. Ultimately, stopping the corruption from occurring in the first place.
Many thanks in advance for any assistance/advice you can give me.
Paul
Removing power would corrupt the card if files were not closed beforehand.
That's entirely expected behaviour for a FAT file system, particularly one running on flash storage.
FAT is not, and never has been, robust against surprise power loss or hard reset events. In the old days, when PCs ran DOS and had real power switches and reset buttons, nobody was surprised by CHKDSK finding problems after either of those was used without preparation. "Close all programs properly before power-down!" was a routine experienced MS-DOS users had already got used to, before Windows 3.1 and '95 drilled it into everyone with force. On development machines, where programs crashed and required a hard reboot more frequently, it was customary to run CHKDSK on every boot.
On top of that, using FAT on flash media would be essentially impossible unless there's a sector remapping mechanism for wear levelling sitting between the FAT file system driver and the raw medium. Which is why "raw" flash file systems should never use FAT.
As it is, the necessarily high frequency of updates to the FATs triggered by open, continuously growing files will stress the wear-levelling mechanism quite a bit. A surprise power loss will cause a failure to update either the FATs themselves or the re-mapping tables in non-volatile storage. If you utilize the full write speed, the on-medium data will practically never be in a state fit for a clean shutdown, so every power loss will leave it corrupted.
I had a similar problem
I use an STM32F2 driving SDIO to the SD card (all Keil software, from early versions of 4 up to 4.74).
I was collecting CAN data and data from a few sensors, and logging it all to the SD card.
I stopped the SD card corrupting by making sure I only wrote to the SD card in whole sectors, e.g. only once 512+ bytes have accumulated in a RAM buffer do I commit 512 of them to the SD card.
Why? It reduced the number of sector reads and writes (root directory entry, FAT, cluster chain, data clusters).
When the system powers down, I write out everything remaining in the RAM buffer.
Another thing to be aware of: I had an issue where the number of files in a folder slowed down file access times. There were no problems with fewer than 30 files, but access started taking much longer at about 60, and with more than, say, 150 files the system would crash: file operations took so long that the system stopped task switching, meaning the watchdog wasn't kicked in time. (There are some "good" while loops in the SDIO driver that the system would seem to crash in.)
I worked out that by storing the data in MxxY20xx folders (month/year named folders, so a maximum of 31 files per folder) the file system stopped slowing down.
It's in the forum threads, so just search for Danny Curran and it will list the file issues I had and how I got round them.
Terrible software ! People pay for this :-( !
But in the good old days, when a PC had at most 640 kB of memory, people had to remember to split their log data into subdirectories to avoid slowdowns. And most embedded devices have much less RAM available.
Next thing is that flash media is bad at emulating a hard disk - traditional file systems for HDD/FDD are optimized for media with completely different behavior.
EEPROM is often a better choice for small- to medium-scale data collection. And for large-scale data collection, it really matters to align data and match writes to flash block sizes - e.g. performing 128 kB writes at 128 kB alignment, or even 1 MB writes at 1 MB alignment.
The FAT file system works badly on embedded equipment running a full Linux and writing to SD cards. It isn't likely to work better on much smaller embedded devices running Keil's small-RAM adaptation.
The optimum is not to require PC compatibility for the SD card data, and instead to optimize for stability/performance. Then either let the embedded device perform some translation and export the data over USB, or maybe let the PC see a single huge file on the memory card and use a custom PC application that extracts the data inside that container.
In the end, Keil can't do miracles with FAT.