
fopen() takes too long to create file

We are using the LPC1788 with RL-ARM. To summarize the problem:

a] We want to store report files in a single folder as soon as a test completes.
b] Up to about 50 reports, the time taken to create a new file stays almost the same.
c] After that, the time keeps increasing with every new file, and it takes close to 8 seconds by the time I reach 100 files or so.
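
A minimal sketch of the kind of loop used to create the reports and time each fopen() call (get_ms_ticks() is a placeholder for any millisecond timer, e.g. one driven from SysTick, and the folder and file names are only examples):

    #include <stdio.h>

    /* Placeholder: returns a free-running millisecond tick count, */
    /* e.g. a counter incremented from the SysTick interrupt.      */
    extern unsigned int get_ms_ticks (void);

    /* Create 'count' report files and print how long each fopen() takes. */
    void create_reports (int count) {
      char name[40];
      int  i;

      for (i = 0; i < count; i++) {
        unsigned int t0, t1;
        FILE *f;

        sprintf (name, "Reports\\REP%03d.TXT", i);   /* example path only */
        t0 = get_ms_ticks ();
        f  = fopen (name, "w");        /* this call slows down as files accumulate */
        t1 = get_ms_ticks ();
        if (f != NULL) {
          fprintf (f, "report %d\r\n", i);
          fclose (f);
        }
        printf ("file %3d: fopen() took %u ms\r\n", i, t1 - t0);
      }
    }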

What could be the possible reason? Is there any limit to the number of files that can be stored in a single folder?

Please help.

  • I haven't used the Keil file system layer, so I haven't checked its configuration options - whether it uses a unified buffer system or separate memory for directory information and file data buffers, and whether it separates the buffering of FAT sectors from the buffering of raw flash blocks.

    But the number of buffers does affect performance. If the library can't keep the full directory in memory, it has to flush one block just to re-read a different one. That is not too expensive when reading data, but while you are adding files it means the file system must constantly write directory data back to flash in order to free the RAM it needs to go back and re-read the start of the directory again.

    One FAT directory entry is 32 bytes, so 32 directory entries consume 1024 bytes of RAM.
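
    For reference, a single FAT short (8.3) directory entry has this fixed 32-byte on-disk layout (standard FAT format, not Keil-specific); one 512-byte sector therefore holds 16 of them, and 1 KB of RAM holds 32:

        #include <stdint.h>

        /* One FAT short (8.3) directory entry - exactly 32 bytes on the medium. */
        typedef struct {
          uint8_t  name[11];        /* 8.3 name, space padded                 */
          uint8_t  attr;            /* attribute flags (directory, hidden...) */
          uint8_t  nt_reserved;     /* reserved                               */
          uint8_t  crt_time_tenth;  /* creation time, 10 ms units             */
          uint16_t crt_time;        /* creation time                          */
          uint16_t crt_date;        /* creation date                          */
          uint16_t lst_acc_date;    /* last access date                       */
          uint16_t fst_clus_hi;     /* first cluster, high word (FAT32)       */
          uint16_t wrt_time;        /* last write time                        */
          uint16_t wrt_date;        /* last write date                        */
          uint16_t fst_clus_lo;     /* first cluster, low word                */
          uint32_t file_size;       /* file size in bytes                     */
        } fat_dir_entry_t;          /* 11+1+1+1 + 7*2 + 4 = 32 bytes          */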

    It's up to the file system implementation how it handles overflow - i.e. directories that are too large to fit in RAM - but very steep increases in runtime are not uncommon, especially if the underlying flash blocks are larger than the simulated FAT sectors.

    Even on full-size file server machines you want enough RAM to keep the directory entries of every directory, from the root down to the current one, cached in memory, or performance drops; and each individual directory should be kept small enough to avoid overly long directory scan times.
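
    As a rough illustration of why the creation time climbs so steeply (assuming 512-byte FAT sectors and a cache that holds only one directory sector), the host-side sketch below estimates the flash sector reads needed just to scan the directory for each new file - the per-file cost grows with the file count, so the total work grows quadratically:

        #include <stdio.h>

        #define ENTRY_SIZE          32u    /* bytes per FAT directory entry */
        #define SECTOR_SIZE        512u    /* assumed FAT sector size       */
        #define ENTRIES_PER_SECTOR (SECTOR_SIZE / ENTRY_SIZE)

        int main (void) {
          unsigned int  n;
          unsigned long total_reads = 0;

          for (n = 1; n <= 100; n++) {
            /* Creating file n means scanning the n-1 existing entries for a  */
            /* name clash and a free slot; with a one-sector cache that costs */
            /* one flash read per directory sector.                           */
            unsigned int dir_sectors = ((n - 1) / ENTRIES_PER_SECTOR) + 1;
            total_reads += dir_sectors;
            if (n % 25 == 0) {
              printf ("creating file #%3u: directory spans %u sector(s)\n", n, dir_sectors);
            }
          }
          printf ("100 files => about %lu directory sector reads in total (O(n^2) growth)\n",
                  total_reads);
          return 0;
        }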

    Check what configuration options you have to allocate more RAM to the file system.
