I am using an LPC1788 with Keil MDK4 along with RL-FlashFS (4 GB external NAND) & the RTX kernel. In my application I use the NAND flash to store test reports & activity logs.
Each report is stored in an individual file & there are 500 reports in each folder, with a max count of 5000 reports (10 folders). Data is written only once to each report file (open -> write -> close).
In the activity log file, the data is appended. In my case, I have appended data to the file around 10000 times (~50 bytes per append). Every time after appending I close the file & then reopen it the next time.
Now I am unable to access any files on the NAND. When I read the directory, the files can be seen, but when I try to open one, the system hangs. In fact, the debugger that I am using (a ULINK2) also hangs & I have to reset it as well.
I have read somewhere that it is not good practice to append small amounts of data to files again & again.
Please guide!
Hello Zack,
I open only a single file at a time for reading or writing. But still, I will try increasing the mutex count.
Another thing I observed is that after any new file is created, the free space on the NAND reduces by 16384 bytes. I check this with ffree() at power-on. However, with FINFO the file size of the newly created file is only 925 bytes.
Is there any mistake in my file system or NAND flash configuration?
Thanks, Punit
Hello Punit,
If it is only one file, then I wouldn't worry about mutexes. The defaults should be more than enough.
Can you check the return values of each fopen() and fclose()? Was the fclose() before the hanging fopen() successful?
I think 16384 is the block size, 8 pages, each page of 2048 bytes. The File_Config.c could confirm this configuration.
The file system counts the available blocks (clusters) to work out the remaining size. That calculation is fast, but it means the reported free space is only block-granular, not per-file. I don't think there's a mistake here; the bytes are just used as a unit of measurement, not as the granularity.
For example, if you added another 925 bytes, ffree() should still return the same value: 16384 bytes (one block) less than an empty file system, right? The new data would be written to the remaining free sectors in the same block.
If the log files are not important, you may try another fformat() with the option LOW_EB: http://www.keil.com/support/man/docs/rlarm/rlarm_fformat.htm before retesting the file system.
Thanks, Zack
I was experimenting with different configurations in File_Config.c.
My original configuration is: 2048+64 bytes (page size), 64 pages (block size), 4096 blocks (device size).
However, as per the manufacturer's datasheet, the page size is 4096 blocks & the total blocks are 2048. When I configure these settings and format the device, strange things happen & the file system does not work as expected (missing files, corrupted files, etc.).
Next, I tried reducing the number of blocks to 2048, keeping everything else the same. Now when I use ffree() after formatting the drive, it returns close to 2 GB. And now every new file occupies 8192 bytes of space instead of 16384.
What should be the ideal configuration for page & block size?
And yes, the file size does not increase until the data in it goes above 16384 bytes.
A follow-up query:
What is the maximum file size supported? Is there any limitation?
Please refer to the link below: www.keil.com/.../fat_fs.html
Punit asked: "What should be the ideal configuration for Page & block size?"
The manufacturer should know this, and document it accurately. If you suspect any discrepancy, contact the manufacturer. What 4GB NAND device are you using?
I think you meant to report page size as 4096 bytes, instead of 4096 blocks.
For NAND flash, from big to small, I think it is: block/cluster ==>> page ==>> sector
It is confusing, because different types of memory devices can re-use the same names to represent different ideas.
Without knowing the NAND device, we can only guess what the configuration should be. I think multiplying (# of blocks) * (# of bytes/page) * (# of pages/block) should result in the max memory for the device (in ***bytes***; remember 4GB ~= 4.295 billion bytes, not 4,000,000,000 bytes), so that can serve as a sanity check for different configurations. You may also want to experiment with altering the (# of pages/block) in the configuration, because it sounds like that may be part of the issue as well.
Ex. 4096 blocks * 4096 bytes/page * 256 pages/block = 4GB
The original configuration would be half a gigabyte, (interestingly close to the size of the file that was going to be opened, right?). This setup doesn't match the total memory available, 4 GB.
Well, I must correct the last piece of math in the last post. I guess the original file was 500KB, that was going to be appended (not 500MB, like I thought). Was the file system already filled (max # of reports beyond 2.5 GB) when the fopen() hung?
One would think an fopen() would have failed earlier, at 500MB, but it still sounds like a bad configuration used with the file system.
Thanks for the calculations above, which I follow. But my NAND flash size is 4 gigabits; this comes to approximately 500 megabytes.
I will try different configurations and repost.
To fill the memory, I am creating a test program that writes data to log files every second.
Will update with results!
Hello,
Is there any way that I can view the FAT table of my drive?
The file allocation table is stored at the beginning of the memory device, so in theory you could do a ReadSect() or PageRead() at the starting addresses: http://www.keil.com/support/man/docs/rlarm/rlarm_fat_readsect.htm http://www.keil.com/support/man/docs/rlarm/rlarm_nand_pageread.htm It probably won't be very human-readable, but it is the allocation table.
If you just want to look at all the files in a directory, use ffind() to search for everything: http://www.keil.com/support/man/docs/rlarm/rlarm_ffind.htm
Were the results of the tests OK?
If you have already resolved the issue, I recommend starting up a different topic in another thread. Is this part of the tests?
Yes, my query is related to the same issue.
I wanted to see if I am able to extract any information from the FAT.
I have got mixed results:
1] Creating multiple blank files seems to work OK (although the time to create files increases gradually as the memory fills). I created 5000-6000 files in each of 3 separate folders over a period of 3 days.
2] When I add the logging part to the above test (appending data to the file and closing it every time), file creation stops (fopen returns NULL) & the logs are also corrupted.
My query:
1] Can the program stack/heap size have any effect in the above case?
2] Can you please explain the meaning of the below:
"When the file content is modified, the old file content is invalidated and a new memory block is allocated. The Flash Block is erased when all the data stored in the Flash Block have been invalidated."
To add to my previous reply, I ran a test appending data to a few files [6 files] at 5-second intervals. Each append was 256 bytes long.
The test ran for around 40000 append cycles, after which all the files became blank.
Any suggestions?
Punit
Avoid reading from the EFS (embedded file system) section, because it describes the process for NOR flash. There are two file systems: EFS and FAT, contained inside FlashFS. By "all the data", the manual means all file fragments stored in the flash block.
If you are using NAND flash with the early version, FlashFS, look at the NAND or *FAT* filesystem documentation.
Does fwrite() or fclose() return any errors, before the issue occurs?
As it was an overnight test, I could not check the return values. I will retest & try to keep track of the return values.
Can you please tell me the size of a cluster for the FAT32 format? In my case, the cluster size is 16 KB, but as per the specification I think it should be 4 KB. Am I misunderstanding something here?
FAT32 does not have a fixed cluster size. Several cluster sizes are accepted. The cluster size can depend on the memory device and its storage size. What type of NAND flash is the project using?
If the specification/datasheet for the NAND flash says 4KB, then use 4 kilobytes.
We have just migrated to Keil MDK version 5 and have already migrated our version 4 projects to version 5.
1] Currently we use RL-FlashFS, but from version 5 onwards the "File System Component" is used instead. Is it worth changing to this new system? Can it help resolve the above issues?
2] Also, there is a very heavy focus on CMSIS-RTOS. We use Keil RTX at present. Should we change that as well?