
Accessing files larger than available RAM on core with MMU

Hi,

I'm working with an ARM926EJ-S-based processor that has some external RAM and an SD card. A file several times larger than the available RAM is stored on the SD card, and an existing library needs to read that file's contents as if the whole thing were resident in memory. Without having the library rewritten, is there a way to make the file appear to be loaded into memory without actually loading all of it?

An approach I was considering was to use the data abort exception:
1) Set up a page table with a virtual address to represent the file. Initialize related pages to disallow read access.
2) Program attempts to read from somewhere in the memory space for the file; data abort exception thrown.
3) The data abort handler determines what kind of fault occurred and the faulting address (available through the CP15 Fault Status and Fault Address Registers). If the address falls within the file's memory space, continue as follows; otherwise, treat it as a genuine abort.
4) Exception handler loads the page, or multiple pages, of content from the SD card into an allocated space in RAM.
5) It then updates the page table accordingly (also disabling any pages that were using the allocated space previously) and flushes the TLB.
6) It returns to the instruction that caused the abort (at R14_abt - 8), so the access that was invalid is retried against the now-valid mapping and execution continues.
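Steps 3-6 can be simulated host-side with the MMU specifics reduced to plain arrays. This is only a sketch: PAGE_SIZE, N_FRAMES, sd_read_page and the rest are made-up names, the "page table" is an ordinary array, and on the real part the fault address would come from CP15 and each remap would be followed by a TLB invalidate.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE  4096u
#define FILE_PAGES 16u          /* scaled-down stand-in for the big file */
#define N_FRAMES   4u           /* scaled-down RAM page pool             */

static uint8_t sd_card[FILE_PAGES * PAGE_SIZE];  /* stand-in for the SD card */
static uint8_t ram_pool[N_FRAMES * PAGE_SIZE];   /* the allocated RAM frames */

/* One "descriptor" per file page: which frame holds it, or -1 if unmapped. */
static int page_to_frame[FILE_PAGES];
static unsigned frame_to_page[N_FRAMES];
static unsigned next_victim;                     /* simple round-robin evict */

static void pager_init(void)
{
    for (unsigned p = 0; p < FILE_PAGES; p++)
        page_to_frame[p] = -1;
}

static void sd_read_page(unsigned page, uint8_t *dst)  /* hypothetical driver */
{
    memcpy(dst, sd_card + page * PAGE_SIZE, PAGE_SIZE);
}

/* Steps 3-6: called when an access to `addr` inside the file window aborts.
 * Returns a pointer to the byte, now resident in RAM. */
static uint8_t *handle_fault(uint32_t addr)
{
    unsigned page = addr / PAGE_SIZE;
    if (page_to_frame[page] < 0) {
        unsigned frame = next_victim;
        next_victim = (next_victim + 1) % N_FRAMES;
        page_to_frame[frame_to_page[frame]] = -1;          /* step 5: unmap old */
        sd_read_page(page, ram_pool + frame * PAGE_SIZE);  /* step 4: load page */
        page_to_frame[page] = (int)frame;                  /* step 5: map new   */
        frame_to_page[frame] = page;
        /* real code: write the L2 descriptor, then invalidate the TLB entry */
    }
    return ram_pool + page_to_frame[page] * PAGE_SIZE + addr % PAGE_SIZE;
}
```

The round-robin victim choice keeps the sketch short; anything smarter slots in where `next_victim` is advanced.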

Is this feasible? I understand the performance hit of having to load content from the SD card as needed, but it saves having to get the library rewritten.

Is there a different approach?

Thanks,

Ken

  • Doesn't sound very embedded-friendly, does it expect a 3 GHz dual core too?

    You'd be much better off refactoring the code with a small memory footprint in mind, perhaps by someone who doesn't assume the infinite resources of a modern desktop PC.

    If the library has a reasonable abstraction layer, perhaps you can load and cache portions of the file. Win32's MapViewOfFile had a 256 MB window as I recall, and presumably you're talking about a file less than 4GB in size?
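    One shape that load-and-cache layer could take, sketched with a single sliding window (window_read, sd_read and the sizes here are illustrative names, not a real API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define WINDOW_SIZE 4096u

static uint8_t  file_image[64 * 1024];       /* stand-in for the file on SD */
static uint8_t  window[WINDOW_SIZE];         /* the one cached window       */
static uint32_t window_base = UINT32_MAX;    /* file offset of the window   */

static void sd_read(uint32_t off, uint8_t *dst, uint32_t len)  /* hypothetical */
{
    memcpy(dst, file_image + off, len);
}

/* Read `len` bytes at file offset `off`, sliding the window on a miss. */
static void window_read(uint32_t off, uint8_t *dst, uint32_t len)
{
    while (len) {
        uint32_t base = off & ~(WINDOW_SIZE - 1);
        if (base != window_base) {           /* miss: reload the window */
            sd_read(base, window, WINDOW_SIZE);
            window_base = base;
        }
        uint32_t in_win = WINDOW_SIZE - (off - base);
        uint32_t n = len < in_win ? len : in_win;
        memcpy(dst, window + (off - base), n);
        off += n; dst += n; len -= n;
    }
}
```

    The catch is that every access in the library has to be funneled through a call like this, which is exactly the rewrite being avoided.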

  • Some simulations run earlier suggested the library's processing time shouldn't be too bad, given low-wait-state access to memory. Since then, however, the library has moved on to bigger and better processors that can handle swap files and the like, so it's been developed further in the direction of near-infinite space. It's now being dragged back to a platform that's smaller in both memory and processing capability, because the larger platforms are cost-prohibitive.

    I expected the library would need to be rewritten with the smaller memory footprint in mind and, as you suggest, I would write code to handle the loading and caching at the library-calling level. From my own understanding of the library, some parts of the file's content are loaded and used more often than others depending on the operating mode, so an intelligent caching scheme works in my favor.
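    A simple starting point for that caching scheme would be least-recently-used eviction over the RAM frames. A minimal sketch, with N_FRAMES and the counter scheme made up for illustration:

```c
#include <assert.h>
#include <stdint.h>

#define N_FRAMES 4u

static uint32_t last_use[N_FRAMES];   /* 0 = never used               */
static uint32_t tick;                 /* bumped on every frame access */

static void frame_touched(unsigned f) { last_use[f] = ++tick; }

/* Pick the frame whose contents were used least recently. */
static unsigned lru_victim(void)
{
    unsigned victim = 0;
    for (unsigned f = 1; f < N_FRAMES; f++)
        if (last_use[f] < last_use[victim])
            victim = f;
    return victim;
}
```

    With the abort-driven approach, `frame_touched` is awkward because hits never trap; a frame is only "touched" when it faults in, which is one reason simple FIFO or round-robin eviction is often used instead.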

    However, I decided to see if there was some way to avoid changing the library at all, and started considering the MMU and the data abort exception as the means to control when to load data from the card.

    I've got to write the code to do the loading and caching anyway, just wondering if I could save the library developer time with this method.

    The file is upwards of 256MB. System RAM is 8MB.