This discussion has been locked.
You can no longer post new replies to this discussion. If you have a question you can start a new discussion

looking for better solution to struct offset

NOTE: C51

Keil uses 16 bit pointers and I have a file which is larger than 64k.

the file is a bunch of structures (all pointed to) that I access as structures

This creates pure havoc when a structure crosses a 64k boundary.

example: a structure located at 0xFFFE:
U16 shortvalue
U8 charvalue
If you access charvalue as str_pointer->charvalue, you get the contents of location 0x0000, not 0x10000.

these values are NOT accessed by 'regular' code, but by U8 ReadFlash(U32 address)

any suggestions? The only one I have come up with is rather 'unpleasant': it is to use #defines instead of structures,
as in the above example:
#define shortvalue 0
#define charvalue 2

this makes [str_pointer->charvalue] into [str_pointer + charvalue], which the compiler treats 'correctly' as a 32-bit addition.
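A minimal sketch of that pattern (ReadFlash is the accessor named above, stubbed here with made-up contents so the sketch is self-contained; the ReadFlash16 helper and the offset names are illustrative assumptions):

```c
#include <stdint.h>

typedef uint8_t  U8;
typedef uint16_t U16;
typedef uint32_t U32;

/* Offsets into the record, mirroring the struct layout above:
   a U16 at offset 0, a U8 at offset 2. */
#define SHORTVALUE 0u
#define CHARVALUE  2u

/* Stand-in for the real flash accessor from the post; here it just
   derives a byte from the address so the sketch compiles on its own. */
U8 ReadFlash(U32 address)
{
    return (U8)(address & 0xFFu);
}

/* Hypothetical helper: fetch a little-endian U16 one byte at a time.
   All arithmetic happens on the 32-bit address, so a record based at
   0xFFFE reads its charvalue from 0x10000, not from 0x0000. */
U16 ReadFlash16(U32 address)
{
    return (U16)(ReadFlash(address) | ((U16)ReadFlash(address + 1u) << 8));
}
```

A record based at 0xFFFE then reads ReadFlash16(base + SHORTVALUE) from 0xFFFE-0xFFFF and ReadFlash(base + CHARVALUE) from 0x10000, with no 16-bit wrap.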

Erik

PS I have begged and begged the creator of these files to 'gap the files' where a 64k cross happens but get a "no way Jose"

  • As a sidebar, I do not believe that "C data structures are meant to be" limited in size or dependent on location in memory.

    But of course they are! They're limited in size at least by the addressable memory range of the processor. And they're dependent on location in memory inasmuch as they have to be located in memory. Your "file" isn't. It's on mass storage.

    If you're writing a program on a PC, you wouldn't expect to map the entire filesystem to a C datastructure, would you?

  • What kind of class would assign such a lame task or question?

    Well, almost. Memory-mapped files are quite often used for high-end solutions. And database servers are the main winners from the availability of 64-bit machines, since they need to be able to map structures larger than 4GB.

  • LOL

    Big oops. The quoted text was expected to be "If you're writing a program on a PC, you wouldn't expect to map the entire filesystem to a C datastructure, would you?"

  • But of course they are! They're limited in size at least by the addressable memory range of the processor.
    Is the fact that I have an "addressable range" of 2MB for my flash chip invalid?

    And they're dependent on location in memory inasmuch as they have to be located in memory. Your "file" isn't. It's on mass storage.
    It is NOT "on mass storage", it is loaded to a flash chip addressed as XDATA.

    Erik

  • Turbo C implemented far and huge pointers, where the far pointer was a normal native pointer that could address a single 64kB block of data within a segment anywhere in memory.

    The huge pointer on the other hand was always normalized (having an offset in the [0..15] range) which allowed it to access structures of any size that could be fitted in memory.

    Keil really should have supplied two kinds of pointers, allowing the user to select the slow but general type, or the fast variant with the 64kB window limit.
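The normalisation described above can be sketched in plain C as a model of 8086 real-mode segment:offset arithmetic (an illustration, not Turbo C's actual code):

```c
#include <stdint.h>

/* Model of a real-mode huge pointer: linear address =
   segment * 16 + offset, with the normalised form keeping the
   offset in the [0..15] range. */
typedef struct {
    uint16_t segment;
    uint16_t offset;
} HugePtr;

uint32_t linear(HugePtr p)
{
    return (uint32_t)p.segment * 16u + p.offset;
}

/* After every arithmetic step the carry is folded back into the
   segment, so the 16-bit offset can never wrap at a 64kB boundary. */
HugePtr huge_add(HugePtr p, uint32_t bytes)
{
    uint32_t addr = linear(p) + bytes;
    HugePtr r;
    r.segment = (uint16_t)(addr >> 4);
    r.offset  = (uint16_t)(addr & 0xFu);
    return r;
}
```

Advancing a pointer at linear 0xFFFE by 4 bytes lands at segment 0x1000, offset 2 (linear 0x10002), instead of a 16-bit offset wrapping back to the start of the old segment.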

  • is the fact that I have an "addressable range" of 2MB for my flash chip invalid?

    As long as that flash is not addressable by the MCU (i.e. it's not HDATA), it is invalid. Just because a rack-full of harddrives can address 500 TB doesn't mean a C program can handle C structs that large.

    It is NOT "on mass storage", it is loaded to a flash chip addressed as XDATA.

    2MB don't fit into 64KB, so that's obviously a misrepresentation of the facts.

  • The huge pointer on the other hand was always normalized (having an offset in the [0..15] range) which allowed it to access structures of any size that could be fitted in memory.

    Of any size? Yes.
    Of arbitrary position? No.

    Even Turbo C's huge pointers don't support objects straddling a 64 KiB boundary. Been there, done that.

  • Memory-mapped files are quite often used

    Files, yes. But not entire filesystems, precisely because they're larger than addressable memory.

    64-bit machines, since they need to be able to map structures larger than 4GB.

    Which was my point, of course: a C program will have its own limits on object size, which have nothing to do with that of mass storage.

  • 2MB don't fit into 64KB, so that's obviously a misrepresentation of the facts.
    We are getting a bit off track here; the issue is not whether (Keil) C should support it, but how to fetch the data.

    Anyhow, as to "obviously a misrepresentation of the facts": Keil DOES support "data banking" which, in its way, makes the data address range 24 bits.

    Again I am, in no form or fashion, stating Keil should process structures >64k and/or crossing page boundaries, just trying to find the 'safest' or, if you will, 'best' solution.

    Erik

  • Anyhow, as to "obviously a misrepresentation of the facts": Keil DOES support "data banking" which, in its way, makes the data address range 24 bits.

    Yes, but that would be called HDATA, not XDATA. And the way you pointed out that "of course" you accessed that data via an accessor function pretty much proved that you're not using that, so it's beside the point.

    There's structurally no difference between an accessor function like that, and generic file I/O via fsetpos()/fread(). For all practical intents and purposes, that data is on mass storage.

    just trying to find the 'safest' or, if you will, 'best' solution.

    There is not really any "safe" way, because you have to leave the region where the safety features of the language can offer any help. There's not much to be improved over the obvious #define'd offsets. As a bonus, that method allows you to break the strict relation between the data layout in the file, and that of the C structures. E.g. even if the other side of the storage model needs padding, you don't have to bother the C51 with it.
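That decoupling can be sketched as follows (the record layout is the one from the opening post; the padded in-RAM struct, the offset names, and the stubbed ReadFlash contents are illustrative assumptions):

```c
#include <stdint.h>

typedef uint8_t  U8;
typedef uint16_t U16;
typedef uint32_t U32;

/* Authoritative packed layout in the flash image: a U16 at offset 0
   and a U8 at offset 2, so each record occupies exactly 3 bytes. */
#define REC_SHORTVALUE 0u
#define REC_CHARVALUE  2u
#define REC_SIZE       3u

/* The in-RAM mirror may be padded and aligned however the compiler
   likes; only the offsets above describe the file. */
typedef struct {
    U16 shortvalue;
    U8  charvalue;
} Record;

/* Stand-in for the accessor from the original post, so the sketch
   compiles on its own. */
U8 ReadFlash(U32 address)
{
    return (U8)(address & 0xFFu);
}

/* Fields are fetched byte-wise at 32-bit addresses (little-endian
   assumed), so a record may straddle any 64k boundary. */
Record read_record(U32 base)
{
    Record r;
    r.shortvalue = (U16)(ReadFlash(base + REC_SHORTVALUE)
                 | ((U16)ReadFlash(base + REC_SHORTVALUE + 1u) << 8));
    r.charvalue  = ReadFlash(base + REC_CHARVALUE);
    return r;
}
```

Note that sizeof(Record) may well come out as 4 here while REC_SIZE stays 3: the file layout and the C layout no longer have to agree.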

  • That depends a bit on your view. The underlying pointer (being a hardware limitation) can't store an offset larger than 65535, but since Borland always kept the pointer normalised, you could create arrays way larger than 64kB and walk through them without hitting any 64k limit - even if individual elements of the array straddled a 64k boundary.

    First off, you had to dynamically allocate the memory if the element size didn't divide 65536 evenly. With a statically allocated array, no element was allowed to straddle any 64kB block.

    But yes, the Borland huge pointer definitely supported data straddling the 64k limits. Just to make sure, I did a quick verify now :)