At this time when the two of you are active here, can I get an answer to the following, which I have sought an answer to for some time, both through support and this forum: is there - or will there be - a means of allowing a struct to cross a 64k boundary? The code generated by the compiler does 16-bit math, so members in the second page are erroneously accessed. This is for address calculation, not actual access, since my system has a "flash page" port and a "read flash" function that sets that port.

Note: I am NOT talking of "placing in any 64k page"; I AM talking about accessing a member of a struct that happens to be in the following 64k page. PLEASE no words about banks; I am not using banking, since 99.99% of the run time my program is happy in 64k code and data.

Erik
There is indeed no "hdata" keyword in C51, even though that's a linker class. Try "far". You might also have a use for the FVAR and FARRAY macros in absacc.h.

And though I hate to possibly set off another rage, I feel obliged to mention that XBANKING.A51 holds the code that allows you to customize C51 to your particular hardware that controls the address lines for a data space larger than 64K. I don't care whether you call it (data) banking or paging; it's the same mechanism, and the same code. The 8051 has many memory address spaces, and more than one of them can be banked, or paged, or segmented; whatever term you prefer.

L51_BANK.A51 has the code banking routines, along with data banking routines. XBANKING.A51 is just the xdata address extension library. Even if your code is not banked, you still need to modify one or the other of these files to get the data bank routines; otherwise, the ability to calculate large addresses is meaningless.

It's also useful to peek at XBANKING.A51 because this file has comments that detail the 3-byte pointer format, which covers generic and far pointers. It's in the manual, too, but for some reason I always found the description in the code a bit more informative. If you declare an item far and take its address, C51 is going to generate a pointer in this format. In particular, note that the tag byte starts at 1 for xdata: address 010000H is the first byte of xdata with a generic/far pointer. (Tag byte 0 denotes a data pointer. Unfortunate bit of history there.)

You can, of course, roll your own code to do the data banking and insert the proper calls by hand, leaving the compiler completely ignorant of your hardware scheme. In that case, you might just as easily declare all your hand-rolled "far pointers" as U32s, since you'll be doing all the arithmetic yourself. You won't get much help from the compiler for structure and field assignments, though, since it has been denied any knowledge of the extended addressing.
> which includes generic and far pointers

Drew, I appreciate your attempts to help, but please understand I have ZERO, NONE, NADA problems with long pointers. The problems are with structs that cross a 64k boundary. The offset is added to the pointer using 16-bit arithmetic, which, of course, mislocates the higher members of the struct when the struct crosses a 64k (OK, if you insist, you may say "bank") boundary. E.g. locate a struct at, say, 0x1fffc: the pointer is fine, but members beyond offset 3 are not accessible.

> obliged to mention that XBANKING.A51 holds the code that allows you to customize C51 to your particular hardware that controls the address lines for a data space larger than 64K. I don't care whether you call it (data) banking or paging; it's the same mechanism, and the same code. The 8051 has many memory address spaces, and more than one of them can be banked, or paged, or segmented; whatever term you prefer.

I can NOT use banking because I switch memory operation between 16-bit (SRAM) and 22-bit (flash) addressing. If I were to use banking, I would need to control the bits beyond 16 in "fast SRAM mode".

I am working with a data set that is set up by the customer using a PC program that stores the processed result of the customer's input as structs. Then all the data is transferred to the '51 to be processed there. I access this data for a second or two an hour, so I have no particular interest in the efficiency of that process. The transfer cannot be done "streamlined" since the endianness is "wrong". However, when the interesting (selected) part has been transferred to SRAM (and the endianness corrected), I need to run at absolute full speed (some assembler to achieve this).

The PC people balk at having to make corrections to cut where 64k is crossed.
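The endianness correction mentioned above might look something like this (a minimal sketch; the widths, swap direction, and function names are assumptions, not Erik's actual transfer code):

```c
#include <stdint.h>
#include <stddef.h>

/* Byte-order fixups for the flash-to-SRAM copy. Which direction the
 * swap goes (PC little-endian vs. '51 big-endian) depends on the
 * toolchain; the swap itself is direction-agnostic.                */
static uint16_t swap16(uint16_t v)
{
    return (uint16_t)((v >> 8) | (v << 8));
}

static uint32_t swap32(uint32_t v)
{
    return ((v & 0x000000FFUL) << 24) | ((v & 0x0000FF00UL) << 8) |
           ((v & 0x00FF0000UL) >> 8)  | ((v & 0xFF000000UL) >> 24);
}

/* Correct endianness while copying a block of 32-bit words to SRAM. */
static void copy_swapped32(uint32_t *dst, const uint32_t *src, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        dst[i] = swap32(src[i]);
}
```

Since this path runs only for a second or two an hour, a plain byte-swapping loop like this is fine; only the SRAM-resident data needs the hand-tuned assembler.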
The whole shebang does not have structs that are particularly big, but the build is

struct a
  * struct b
  * struct b
  * struct b
struct b
  * struct c
  * struct c
  * struct c

etc., for about 12 levels. As you see, the recalculation, in order to push the crossing struct up n bytes, would be a total nightmare.

Erik