I am using an LPC1788 with Keil MDK4, along with RL-FlashFS [4GB external NAND] & the RTX kernel. In my application I use the NAND flash to store test reports & activity logs.
Each report is stored in an individual file, & there are 500 reports in each folder, with a max count of 5000 reports [10 folders]. Data is written only once to a report file [open -> write -> close].
In the activity log file, the data is appended. In my case, I have appended data to the file around 10000 times [~50 bytes per append]. Every time after appending I close the file & then reopen it the next time.
Now, I am unable to access any files on the NAND. When I read the directory, the files can be seen, but when I try to open one, the system hangs. In fact, the debugger that I am using [ULINK2] also hangs & I have to reset it as well.
I have read somewhere that it is not good practice to append small amounts of data to files again & again.
Please guide!
A follow-up query:
What is the maximum file size supported? Is there any limitation?
Please refer to the link below: www.keil.com/.../fat_fs.html
Punit asked: "What should be the ideal configuration for Page & block size?"
The manufacturer should know this, and document it accurately. If you suspect any discrepancy, contact the manufacturer. What 4GB NAND device are you using?
I think you meant to report page size as 4096 bytes, instead of 4096 blocks.
For NAND flash, from big to small, I think it is: block/cluster => page => sector
It is confusing, because different types of memory devices can re-use the same names to represent different ideas.
Without knowing the NAND device, we can only guess what the configuration should be. Multiplying (# of blocks) * (# of bytes/page) * (# of pages/block) should result in the max memory for the device (in ***bytes***; remember 4GB ~= 4.295 billion bytes, not 4,000,000,000 bytes), so that can serve as a sanity check for different configurations. You may also want to experiment with altering the (# of pages/block) in the configuration, because it sounds like that may be part of the issue as well.
Ex. 4096 blocks * 4096 bytes/page * 256 pages/block = 4GB
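In case it is useful, here is a minimal sanity-check sketch of that arithmetic in C (the geometry values are just the example numbers above, not taken from any specific part's datasheet):

#include <stdint.h>
#include <stdio.h>

/* Sanity-check a NAND geometry against the expected capacity.
   The values below are only the example numbers from this thread. */
int main (void) {
  uint32_t blocks          = 4096;
  uint32_t bytes_per_page  = 4096;
  uint32_t pages_per_block = 256;
  uint64_t capacity = (uint64_t)blocks * bytes_per_page * pages_per_block;

  printf ("Capacity: %llu bytes (%.3f GB)\n",
          (unsigned long long)capacity,
          capacity / (1024.0 * 1024.0 * 1024.0));
  return 0;
}

The printed total should match the device's datasheet capacity; if it doesn't, the configured geometry is wrong.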
The original configuration would be half a gigabyte (interestingly close to the size of the file that was going to be opened, right?). This setup doesn't match the total memory available, 4 GB.
Well, I must correct the last piece of math in the last post. I guess the original file that was going to be appended was 500 KB (not 500 MB, like I thought). Was the file system already filled (max # of reports, beyond 2.5 GB) when the fopen() hung?
One would think an fopen() would have failed earlier, at 500 MB, but it still sounds like a bad configuration was used with the file system.
Hello Zack,
Thanks for the calculations above. But my NAND flash size is 4 gigabits; this comes to approximately 500 megabytes (4 Gbit / 8 = 512 MB).
I do understand the calculations above, though.
I will try different configurations and repost.
To fill the memory, I am creating a test program that writes data to log files every second.
Will update with results!
Thanks, Punit
Hello,
Is there any way that I can view the FAT table of my drive?
The file allocation table is stored at the beginning of the memory device, so in theory you could do a ReadSect() or PageRead() at the starting addresses: http://www.keil.com/support/man/docs/rlarm/rlarm_fat_readsect.htm http://www.keil.com/support/man/docs/rlarm/rlarm_nand_pageread.htm It probably won't be very human-readable, but it is the allocation table.
If you just want to look at all the files in a directory, use ffind() to search for everything: http://www.keil.com/support/man/docs/rlarm/rlarm_ffind.htm
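As a minimal sketch (assuming the NAND is mounted as drive "N:"; adjust the drive letter and pattern to your configuration), a directory listing with ffind() looks roughly like this:

#include <rtl.h>
#include <stdio.h>

/* List every file on the NAND drive with ffind().
   info.fileID must be zeroed before the first call so the
   search starts from the first directory entry. */
void list_files (void) {
  FINFO info;

  info.fileID = 0;
  while (ffind ("N:*.*", &info) == 0) {   /* 0 means a match was found */
    printf ("%-32s %5d bytes, ID: %04d\n",
            info.name, info.size, info.fileID);
  }
}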
Were the results of the tests OK?
If you have already resolved the issue, I recommend starting a different topic in another thread. Is this part of the tests?
Yes, my query is related to the same issue.
I wanted to see if I am able to extract any information from the FAT.
I have got mixed results:
1] Creating multiple blank files seems to work OK (although the time to create files increases gradually as the memory fills). I created 5000-6000 files each in 3 separate folders over a period of 3 days.
2] When I add the logging part to the above test [appending data to the file and closing it every time], file creation stops [fopen returns NULL] & the logs are also corrupted.
My query:
1] Can the program stack/heap size have any effect in the above case?
2] Can you please explain the meaning of the following:
"When the file content is modified, the old file content is invalidated and a new memory block is allocated. The Flash Block is erased when all the data stored in the Flash Block have been invalidated."
To add to my previous reply, I ran a test appending data to a few files [6 files] at 5-second intervals. The data was 256 bytes long.
The test ran for around 40000 append cycles, after which all the files became blank.
Any suggestions?
Punit
Avoid reading from the EFS (embedded file system) section, because it describes the process for NOR flash. There are two file systems contained inside FlashFS: EFS and FAT. By "all the data", the manual means all the file fragments stored in the flash block.
If you are using NAND flash with the early version, FlashFS, look at the NAND or *FAT* file system documentation.
Does fwrite() or fclose() return any errors before the issue occurs?
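For the retest, a minimal sketch of an append with return-value checks might look like the following (the drive letter "N:" and the file name are placeholders; RL-FlashFS retargets the standard C file I/O):

#include <stdio.h>

/* Append one log record and report any I/O error.
   "N:\\LOG.TXT" is a placeholder path. */
int append_log (const char *rec, unsigned int len) {
  FILE *f = fopen ("N:\\LOG.TXT", "a");

  if (f == NULL) {
    printf ("fopen failed\n");
    return -1;
  }
  if (fwrite (rec, 1, len, f) != len) {
    printf ("fwrite failed (short write)\n");
    fclose (f);
    return -1;
  }
  if (fclose (f) != 0) {   /* buffered data is flushed here, so errors can surface on close */
    printf ("fclose failed\n");
    return -1;
  }
  return 0;
}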
As it was an overnight test, I could not check the return values. I will retest & try to keep track of the return values.
Can you please tell me what the cluster size is for the FAT32 format? In my case, the cluster size is 16 KB, but as per the specification I think it should be 4 KB. Am I misunderstanding something here?
Hello Punit,
FAT32 does not have a fixed cluster size. Several cluster sizes are accepted. The cluster size can depend on the memory device and its storage size. What type of NAND flash is the project using?
If the specification/datasheet for the NAND flash says 4 KB, then use 4 KB.
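In case it helps, the cluster size follows from two fields in the FAT32 boot sector. The field names below come from the Microsoft FAT32 specification; the values are only illustrative, showing how a 16 KB cluster arises from 512-byte sectors:

#include <stdint.h>
#include <stdio.h>

/* Cluster size = bytes/sector * sectors/cluster, both read from
   the FAT32 BIOS Parameter Block. Illustrative values only. */
int main (void) {
  uint16_t BPB_BytsPerSec = 512;   /* bytes per sector    */
  uint8_t  BPB_SecPerClus = 32;    /* sectors per cluster */

  printf ("Cluster size: %u bytes\n",
          (unsigned)BPB_BytsPerSec * BPB_SecPerClus);
  return 0;
}

So a 16 KB cluster simply means the formatter chose 32 sectors per cluster; it is still valid FAT32.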
Thanks, Zack
We have just migrated to Keil MDK version 5 and have already migrated our version 4 projects to version 5.
1] Currently we use RL-FlashFS, but from version 5 onwards the "File System Component" is used. Is it worth changing to this new system? Can it help resolve the above issues?
2] Also, there is a very heavy focus on CMSIS-RTOS. We use Keil RTX at present. Should we change that as well?
1) We haven't really narrowed down the cause, so it is hard to know if a change in middleware would resolve the issue. In general, upgrading does give the file system the best chance of working well. There have been added features and bug fixes over the years, just like in most software layers, so it may help if you are really stuck. The most important reason to upgrade is that the File System Component is actively maintained, and you'd have access to Keil tech support, who can help try to resolve the issue with you by answering questions about the tools in a better format than a forum. You'd also get access to the Event Recorder for more detailed feedback during debug, if you are using a recent version of the file system (Middleware v7.3.0+). Feel free to mention this forum thread to the support engineer, so they know what has already been covered. (See the bring-up sketch after point 2 below.)
2) The File System Component requires a CMSIS-RTOS. We also have a migration guide if you are moving from the earlier RTX to the CMSIS-RTOS v1-compliant RTX: http://www.keil.com/appnotes/files/apnt_264.pdf
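As mentioned in point 1, here is a minimal bring-up sketch for a NAND drive with the MDK5 File System Component (the drive name "N0:" assumes the first NAND drive instance in the component configuration):

#include "rl_fs.h"   /* MDK5 File System Component API */
#include <stdio.h>

/* Initialize and mount the NAND drive; fsOK indicates success. */
void mount_nand (void) {
  fsStatus st;

  st = finit ("N0:");        /* initialize the drive resources */
  if (st == fsOK) {
    st = fmount ("N0:");     /* mount the drive                */
  }
  if (st != fsOK) {
    printf ("NAND drive init/mount failed (status %d)\n", (int)st);
  }
}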