<?xml version="1.0" encoding="UTF-8" ?>
<?xml-stylesheet type="text/xsl" href="https://community.arm.com/utility/feedstylesheets/rss.xsl" media="screen"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:wfw="http://wellformedweb.org/CommentAPI/"><channel><title>FS Flash STM SDIO - slowing down with number of files</title><link>https://community.arm.com/developer/tools-software/tools/f/keil-forum/27616/fs-flash-stm-sdio---slowing-down-with-number-of-files</link><description> 
Thought I would ask the question to see if anyone else has come
across this problem before

 
STM32F103 @ 72Mhz 
using 4.21 RTX and FS FLASH 
using the Keil SDIO driver (4 bit mode) for this device
SDIO_stm32F103.c 
using the latest version of the</description><dc:language>en-US</dc:language><generator>Telligent Community 10</generator><item><title>RE: FS Flash STM SDIO - slowing down with number of files</title><link>https://community.arm.com/thread/127090?ContentTypeID=1</link><pubDate>Fri, 16 Sep 2011 08:04:28 GMT</pubDate><guid isPermaLink="false">dd9e70c8-6d3c-4c71-b136-2456382a7b5c:a1da3768-9739-474e-915e-e30f28d011c6</guid><dc:creator>Dan Curran</dc:creator><description>&lt;p&gt;&lt;p&gt;
I've tried various options when writing files to different
directories, names, lengths etc.&lt;br /&gt;
Every now and then it falls over (the RTX stops toggling the
watchdog and the hardware resets), but on restart it does not appear
to get hung like it used to when trying to open a file&lt;br /&gt;
(the file count is around 1200 files in 44 directories).&lt;br /&gt;
Being pragmatic, the solution I have will do for now&lt;/p&gt;

&lt;p&gt;
Just been informed there is an update to the tool chain because of
the file system etc. The links I had to the source for the FS do
not seem to be active with the new update, so let's find out what's
new!&lt;/p&gt;
&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item><item><title>RE: FS Flash STM SDIO - slowing down with number of files</title><link>https://community.arm.com/thread/119462?ContentTypeID=1</link><pubDate>Wed, 14 Sep 2011 09:23:28 GMT</pubDate><guid isPermaLink="false">dd9e70c8-6d3c-4c71-b136-2456382a7b5c:f51398ca-aba7-448c-bf85-43439243cb56</guid><dc:creator>ImPer Westermark</dc:creator><description>&lt;p&gt;&lt;p&gt;
Yes, but consider:&lt;/p&gt;

&lt;p&gt;
/year/mon/mday/&amp;lt;time&amp;gt;.log&lt;/p&gt;
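A minimal sketch of building such a path from a timestamp; the exact field widths and the .log extension here are assumptions for illustration, not the poster's actual layout:

```python
from datetime import datetime

# Build a /year/mon/mday/TIME.log style path from a timestamp.
# Using HHMMSS for the file name keeps it inside 8.3 limits and
# caps each directory at one day's worth of files.
def make_log_path(when: datetime) -> str:
    return when.strftime("/%Y/%m/%d/%H%M%S.log")
```

For example, make_log_path(datetime(2011, 9, 14, 8, 30)) gives "/2011/09/14/083000.log".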

&lt;p&gt;
The issue shouldn&amp;#39;t be the number of files in the file system, but
the number of files in a specific directory (or potentially,
depending on implementation, the total number of files in current
directory and all directories above) and how much file name data each
file produces.&lt;/p&gt;

&lt;p&gt;
A FAT file system has fixed-size directory entries. VFAT is a
way to add many &amp;quot;deleted&amp;quot; file entries to the same directory, where
these &amp;quot;deleted&amp;quot; entries contain the additional text of the long
file name. So a directory listing must walk through the data to find
and combine multiple directory entries just to recreate one &amp;quot;real&amp;quot;
directory entry.&lt;/p&gt;
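The on-disk layouts behind this, per the FAT specification: both entry kinds are 32 bytes, a long-name entry is marked by attribute 0x0F, and each one carries 13 UTF-16 code units of the name. A sketch with ctypes:

```python
import ctypes

# FAT short (8.3) directory entry, 32 bytes on disk.
class FatDirEntry(ctypes.LittleEndianStructure):
    _pack_ = 1
    _fields_ = [
        ("name", ctypes.c_char * 11),      # 8.3 short name, space padded
        ("attr", ctypes.c_uint8),          # 0x0F marks a long-name entry
        ("nt_reserved", ctypes.c_uint8),
        ("crt_time_tenth", ctypes.c_uint8),
        ("crt_time", ctypes.c_uint16),
        ("crt_date", ctypes.c_uint16),
        ("lst_acc_date", ctypes.c_uint16),
        ("fst_clus_hi", ctypes.c_uint16),
        ("wrt_time", ctypes.c_uint16),
        ("wrt_date", ctypes.c_uint16),
        ("fst_clus_lo", ctypes.c_uint16),
        ("file_size", ctypes.c_uint32),
    ]

# VFAT long-file-name entry, also 32 bytes, holding 13 UTF-16 units.
class FatLfnEntry(ctypes.LittleEndianStructure):
    _pack_ = 1
    _fields_ = [
        ("ord", ctypes.c_uint8),           # sequence number; 0x40 flags the last piece
        ("name1", ctypes.c_uint16 * 5),    # UTF-16 code units 1-5
        ("attr", ctypes.c_uint8),          # always 0x0F
        ("entry_type", ctypes.c_uint8),
        ("chksum", ctypes.c_uint8),        # checksum of the matching short name
        ("name2", ctypes.c_uint16 * 6),    # code units 6-11
        ("fst_clus_lo", ctypes.c_uint16),  # always zero
        ("name3", ctypes.c_uint16 * 2),    # code units 12-13
    ]
```

So a 30-character name costs three extra 32-byte entries on top of the short-name entry, quadrupling the directory data a scan must read for that one file.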

&lt;p&gt;
And if you have code that wants to list all files in alphabetical
order, and that code is implemented without buffering but as &amp;quot;find
the smallest name larger than the current one&amp;quot;, then every single new
file name listed is retrieved by a full scan of all directory entries.&lt;/p&gt;
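The unbuffered listing described above can be sketched as follows; each emitted name costs one full pass over every directory entry, so n names take n full scans. On the real device each pass would be repeated sector reads from the card rather than a list comprehension:

```python
# Naive alphabetical listing without buffering: repeatedly scan the
# whole directory for the smallest name greater than the last one
# emitted. This is the O(n^2) behaviour the post describes.
def list_sorted(entries: list[str]) -> list[str]:
    out: list[str] = []
    current = ""
    for _ in entries:
        # one complete directory scan per output line
        bigger = [name for name in entries if name > current]
        if not bigger:
            break
        current = min(bigger)
        out.append(current)
    return out
```

With 1200 files that is roughly 1.4 million entry comparisons for a single listing, which is where the slowdown with file count comes from.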

&lt;p&gt;
And if all the directory entries are too large for one sector of the
file system (or maybe one cluster, depending on what caching level
the file system code is using), then the code needs to store multiple
sectors (or clusters) in RAM when doing this. If the configuration
supports only one or two sectors (or clusters) in RAM, then you get
huge thrashing where the same 5 or 10 sectors are reread a large
number of times while scanning the directory. A PC with a naive
implementation (MS-DOS) can still manage large directories just
because it has many buffers available. And Win95 and later are not
limited to a few kB for a small command.com, so they can use more
modern sort algorithms to read in lots of file names and rearrange
them without any n^2 (or worse) algorithm involved.&lt;/p&gt;

&lt;p&gt;
So maybe you can increase the number of buffers supported. And if
you can avoid long file names, you will reduce the amount of data that
must be processed for a directory scan. And a deeper hierarchical
design will reduce the number of files you may produce at the leaf
level.&lt;/p&gt;

&lt;p&gt;
I haven&amp;#39;t worked with the Keil FS implementations, so I can&amp;#39;t
really help with specifics, but their implementation has to be
affected by the same limitations as &amp;quot;all other&amp;quot; FS implementations.
And many files in a directory, or very long file names, are features
that do result in trouble for RAM-limited implementations.&lt;/p&gt;
&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item><item><title>RE: FS Flash STM SDIO - slowing down with number of files</title><link>https://community.arm.com/thread/105268?ContentTypeID=1</link><pubDate>Wed, 14 Sep 2011 04:59:40 GMT</pubDate><guid isPermaLink="false">dd9e70c8-6d3c-4c71-b136-2456382a7b5c:3254feec-5fa8-478a-bbb4-364aef04d119</guid><dc:creator>Dan Curran</dc:creator><description>&lt;p&gt;&lt;p&gt;
Forgot to add: there is about 100 KB of data per day.&lt;/p&gt;
&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item><item><title>RE: FS Flash STM SDIO - slowing down with number of files</title><link>https://community.arm.com/thread/79578?ContentTypeID=1</link><pubDate>Wed, 14 Sep 2011 04:54:53 GMT</pubDate><guid isPermaLink="false">dd9e70c8-6d3c-4c71-b136-2456382a7b5c:f335987d-6025-4f21-a00d-59aef688a1d5</guid><dc:creator>Dan Curran</dc:creator><description>&lt;p&gt;&lt;p&gt;
As a quick test I have tried&lt;/p&gt;

&lt;p&gt;
opening a directory based on month and year, then opening just one
file a day and appending all data to it.&lt;/p&gt;

&lt;p&gt;
With one new file per day, i.e. at most 31 files per directory, it
was able to write the equivalent of 3 years' worth with, it seems, no
degradation of the system.&lt;/p&gt;

&lt;p&gt;
This seems to behave okay, but it does require changes to the data
written in the header and to the PC software that analyses the logged
data. Plus, at the moment the system's settings take about 1.3 KB, so
if the system starts 30 times a day there is a lot of data being
repeated in the file.&lt;/p&gt;

&lt;p&gt;
There is a reason for the long file names (e.g. admin and being
human readable), but if it can't do the task required because of
these problems it will have to change.&lt;/p&gt;
&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item><item><title>RE: FS Flash STM SDIO - slowing down with number of files</title><link>https://community.arm.com/thread/66917?ContentTypeID=1</link><pubDate>Wed, 14 Sep 2011 04:12:36 GMT</pubDate><guid isPermaLink="false">dd9e70c8-6d3c-4c71-b136-2456382a7b5c:824bd3d3-49f4-4925-b235-3ad1aa548281</guid><dc:creator>ImPer Westermark</dc:creator><description>&lt;p&gt;&lt;p&gt;
My guess is that it is caused by both the long file names and the
number of sectors that the code is caching in memory. So when
browsing the directory, the code performs a large number of walks
through the directory information (the directory is basically a
file storing fixed-size records and allocating n records at a
time).&lt;/p&gt;

&lt;p&gt;
When operating on the files, you probably get a truly huge number
of sector reads, from constantly reading the start, middle and end
of the directory into the swapping RAM buffers.&lt;/p&gt;

&lt;p&gt;
What happens if you use the epoch time stamp as the file name
instead, so you get a file name that fits in an 8.3 name? That should
speed up the operation, but by how much?&lt;/p&gt;
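One way to make an epoch timestamp fit an 8.3 name is eight hex digits: they cover a 32-bit time value, so every second up to 2106 gets a unique short name and no VFAT long-name entries are created. The .LOG extension is an arbitrary choice for this sketch:

```python
# Pack a 32-bit epoch timestamp into an 8.3-compatible file name:
# eight upper-case hex digits plus a three-letter extension, so the
# directory holds one 32-byte entry per file instead of several.
def epoch_name(epoch: int) -> str:
    return f"{epoch:08X}.LOG"
```

Names generated this way also sort chronologically when compared as plain strings, which suits the naive listing scan.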

&lt;p&gt;
When handling lots of files, you should also consider regularly
creating new directories.&lt;/p&gt;
&lt;/p&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;</description></item></channel></rss>