NOTE: C51
Keil uses 16-bit pointers and I have a file which is larger than 64k.
The file is a bunch of structures (all pointed to) that I access as structures.
This creates pure havoc when a structure crosses a 64k boundary.
Example: at 0xFFFE, U16 shortvalue; U8 charvalue. If you access charvalue as str_pointer->charvalue, you get the contents of location 0, not 0x10000.
These values are NOT accessed by 'regular' code, but by U8 ReadFlash(U32 address).
Any suggestions? The only one I have come up with is rather 'unpleasant': use #defines instead of structures, as in the above example: #define shortvalue 0 #define charvalue 2.
This turns [str_pointer->charvalue] into [str_pointer + charvalue], which the compiler treats 'correctly' as a 32-bit addition.
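The workaround above can be sketched as follows. This is a minimal illustration, not Erik's actual code: the flash is simulated with a plain array so the example is self-contained, and the field-reader helpers (read_shortvalue, read_charvalue) are hypothetical names. The point is that the field offset is added to a 32-bit base address before ReadFlash is called, so nothing ever wraps at 64 KiB:

```c
#include <stdint.h>

/* Simulated flash image; on the real target ReadFlash would talk
   to the flash hardware instead. */
static uint8_t flash[0x20000];

/* The accessor from the post: fetch one byte at a full 32-bit address. */
static uint8_t ReadFlash(uint32_t address)
{
    return flash[address];
}

/* Field offsets as #defines instead of a struct, so the offset is
   applied as 32-bit arithmetic and never truncated to 16 bits. */
#define SHORTVALUE 0  /* U16 at offset 0 */
#define CHARVALUE  2  /* U8  at offset 2 */

/* Read the U16 field byte by byte (little-endian assumed here). */
static uint16_t read_shortvalue(uint32_t base)
{
    return (uint16_t)(ReadFlash(base + SHORTVALUE)
                      | ((uint16_t)ReadFlash(base + SHORTVALUE + 1) << 8));
}

static uint8_t read_charvalue(uint32_t base)
{
    return ReadFlash(base + CHARVALUE);
}
```

With a structure based at 0xFFFE, read_charvalue(0xFFFE) correctly fetches from 0x10000 instead of wrapping around to location 0.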
Erik
PS: I have begged and begged the creator of these files to 'gap the files' where a 64k cross happens, but get a "no way, Jose".
Turbo C implemented far and huge pointers, where the far pointer was a normal native pointer that could address a single 64kB block of data within a segment anywhere in memory.
The huge pointer on the other hand was always normalized (having an offset in the [0..15] range) which allowed it to access structures of any size that could be fitted in memory.
Keil really should have supplied two kinds of pointers, allowing the user to select the slow but general type, or the fast variant with the 64kB window limit.
Of any size? Yes. Of arbitrary position? No.
Even Turbo C's huge pointers don't support objects straddling a 64 KiB boundary. Been there, done that.
That depends a bit on your view. The underlying pointer (being a hardware limitation) can't store an offset larger than 65535, but since Borland always kept the pointer normalised, you could create arrays way larger than 64kB and walk around in them without fighting any 64k limit - even if individual elements of the array straddled a 64k boundary.
First off, you had to dynamically allocate the memory if the element size didn't divide 65536 evenly. With a statically allocated array, no element was allowed to straddle any 64kB block.
But yes, the Borland huge pointer definitely supported data straddling the 64k limits. Just to make sure, I did a quick verify now :)
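The normalisation being discussed is simple address arithmetic: in real mode the linear address is segment*16 + offset, and a normalised huge pointer keeps the offset in [0..15] by pushing everything else into the segment. A minimal sketch (the HugePtr type and helper names are mine, for illustration only):

```c
#include <stdint.h>

/* Real-mode segment:offset pair; linear address = segment*16 + offset. */
typedef struct {
    uint16_t segment;
    uint16_t offset;
} HugePtr;

/* Normalise a linear address so the offset lands in [0..15],
   as Borland's huge pointers did after every operation. */
static HugePtr huge_normalize(uint32_t linear)
{
    HugePtr p;
    p.segment = (uint16_t)(linear >> 4);
    p.offset  = (uint16_t)(linear & 0xFu);
    return p;
}

/* Advance a huge pointer by any byte count and renormalise, so the
   16-bit offset never overflows even when crossing a 64k boundary. */
static HugePtr huge_add(HugePtr p, uint32_t bytes)
{
    uint32_t linear = (uint32_t)p.segment * 16u + p.offset + bytes;
    return huge_normalize(linear);
}
```

Stepping two bytes from linear 0xFFFE lands cleanly at 0x10000 (segment 0x1000, offset 0) instead of wrapping the offset - which is exactly why a huge pointer can walk across 64k boundaries where a far pointer cannot.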