Hi, I am using the Keil compiler for 8051 development. I use function pointers throughout my program, together with code banking. I have 512K of external ROM, configured as 16 banks of 32K each. From what I found on the web, function pointers can be used with code banking, but only on the condition that all the related files are kept in the common area or in the same bank, and that the OVERLAY directive is used. Now my problem is that it is very difficult to keep all the files involved in function-pointer calls in one bank: I have too many files, and I get a code overflow if I keep them all in the same bank. Please reply as soon as possible, and also explain more about this OVERLAY directive in combination with function pointers and code banking. Regards, have a nice day, Niraj Patel
So, let's see: your code is too big to fit into the 64K code-size limit of the 8051 architecture, so you've gone to code banking. Now your code is too big to fit within the limitations of code banking! Don't you think it might be time to reconsider whether an 8051 is appropriate for this particular project...? At the very least, take a look at some of the 8051 derivatives with extended address space...
It would be valuable to have a tool that can analyze the source for locality of reference, and optimize the bank assignments accordingly. Currently, this process must be done purely manually.

For example, let's say I have modules C, S1, S2, T1, and T2. It just so happens that S1 calls a lot of stuff in S2, and vice versa, but not much of anything from T1/T2. Conversely, T1/T2 are tightly bound to each other, but have little to do with S1/S2. Everybody uses module C. The ideal bank assignment here is C in the common bank, S1/S2 in bank 1, and T1/T2 in bank 2. If you move things around, extra stubs get generated in the common bank to call all the extra functions, and that can be a significant amount of space.

It's possible to eyeball the code and get a rough bank assignment by "just knowing" which modules are bound. But it would be much better to have a tool that uses the linkage information to actually calculate the size of the interface between modules, and minimize the inter-bank calls.
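(To make the assignment itself concrete: with BL51 that grouping is expressed with linker directives, roughly like this - syntax quoted from memory, so check the BANKAREA/BANKx entries in the BL51 manual before relying on it. Modules not listed in any BANKx line land in the common area:)

    BL51 C.OBJ, S1.OBJ, S2.OBJ, T1.OBJ, T2.OBJ &
         BANKAREA (8000H, 0FFFFH) &
         BANK1 (S1.OBJ, S2.OBJ) &
         BANK2 (T1.OBJ, T2.OBJ) &
         TO PROJECT

Every time the grouping changes, you edit these lists by hand - which is exactly the manual process I'd like a tool to take over.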
"It's possible to eyeball the code and get a rough bank assignment by 'just knowing' which modules are bound. But it would be much better to have a tool that uses the linkage information to actually calculate the size of the interface between modules, and minimize the inter-bank calls." This would suffer from "doing only half the job" syndrome. The linker can know the width of the interfaces, i.e. the number of inter-bank code paths, but it can't minimize the inter-bank calls. To do the latter, it would also have to know how often each of those potential code paths is actually taken. And of course, the OP's program would almost certainly drive any such automatic scheme into complete inefficiency, with its massive use of function pointers: to the linker, that tends to look like "everything calls everything". No wonder the trampoline collection grows beyond all bounds.
"It would be valuable to have a tool that can analyze the source for locality of reference, and optimize the bank assignments accordingly. Currently, this process must be done purely manually." There are various tools around that analyse code for various "metrics"; eg, http://www.mccabe.com/iq.htm I wonder if such things might be able to provide useful information? Trouble is, they tend to be very expensive - so the money might be better spent in moving to an ARM or something... ;-)
"so the money might be better spent on moving to an ARM or something... ;-)" Let me suggest a "something": a seriously pumped-up 8051. The DS80C390 in contiguous mode, for one, would seem to be exactly the kind of remedy for that kind of problem. It has its own little quirks, but compared to doing 16 x 32K code banking on a classic 8051, it's heaven on earth.
The most valuable metric would probably be the inter-bank call count. I haven't thought about it in detail; maybe you'd also want to measure the parameter space needed. But the primary metric is probably just calls out of the current bank.

So the tool would have a big 2D matrix, indexed by module (segment), with a count of all calls to other segments. Each segment must be assigned to a bank. Minimize the total cost by changing bank assignments to reduce the inter-bank count. For extra credit, allow manual "nailing down" of the bank assignment for particular modules that have special requirements (say, a need for speed that forces them into the common bank). I'm sure somebody's project metrics count entry points, but it seems unlikely that any of them organize modules into groups in this manner.

In one of my particular cases, I've only got about 90KB of code, hardly a reason to move to an ARM7 or whatever. (Not that it's a possibility anyway, because many hundreds of thousands of deployed units exist with the current processor.) But even with a small program, it's still an annoying manual process to assign .c files to banks. Add some code, or even a few debug statements, and the bank overflows and the linker just quits. There's no automatic reassignment or balancing at all.
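Just to make that concrete, here's a rough sketch in plain C (runnable on a PC, not on the target) of the kind of single-move hill-climb such a tool could do. The call counts and module sizes are invented for illustration - a real tool would mine them from the .M51 link map - and the cost model ignores the stub space itself:

    #include <stdio.h>

    #define NMOD       5            /* modules: C, S1, S2, T1, T2          */
    #define NBANK      3            /* bank 0 = common area, banks 1 and 2 */
    #define BANK_SIZE  0x8000UL     /* 32K per bank                        */

    static const char *name[NMOD] = { "C", "S1", "S2", "T1", "T2" };

    /* calls[i][j] = number of call sites in module i that target module j.
       These counts are made up; a real tool would read the link map.      */
    static unsigned calls[NMOD][NMOD] = {
        /*        C   S1  S2  T1  T2 */
        /* C  */ { 0,  0,  0,  0,  0 },
        /* S1 */ { 9,  0, 40,  1,  0 },
        /* S2 */ { 7, 35,  0,  0,  1 },
        /* T1 */ { 8,  0,  1,  0, 50 },
        /* T2 */ { 6,  1,  0, 45,  0 },
    };

    static unsigned long msize[NMOD] = { 0x1000, 0x3000, 0x2800, 0x3000, 0x2800 };
    static int           bank[NMOD];    /* current bank assignment         */
    static unsigned long used[NBANK];   /* space consumed in each bank     */

    /* Cost = calls that cross a bank boundary; calls into the common
       area (bank 0) are free because they need no bank-switch stub.       */
    static unsigned cost(void)
    {
        unsigned c = 0;
        int i, j;
        for (i = 0; i < NMOD; i++)
            for (j = 0; j < NMOD; j++)
                if (bank[i] != bank[j] && bank[j] != 0)
                    c += calls[i][j];
        return c;
    }

    int main(void)
    {
        int i, b, improved;

        bank[0] = 0;                      /* "nail down" module C: common  */
        used[0] = msize[0];
        for (i = 1; i < NMOD; i++) {      /* naive initial round-robin     */
            b = 1 + (i - 1) % (NBANK - 1);
            bank[i] = b;
            used[b] += msize[i];
        }

        do {                              /* hill-climb: one-module moves  */
            improved = 0;
            for (i = 1; i < NMOD; i++) {
                int from = bank[i];
                unsigned before = cost();
                for (b = 1; b < NBANK; b++) {
                    if (b == from || used[b] + msize[i] > BANK_SIZE)
                        continue;         /* move must fit in the bank     */
                    bank[i] = b;
                    if (cost() < before) {    /* keep an improving move    */
                        used[from] -= msize[i];
                        used[b]    += msize[i];
                        improved = 1;
                        break;
                    }
                    bank[i] = from;       /* otherwise undo it             */
                }
            }
        } while (improved);

        for (i = 0; i < NMOD; i++)
            printf("%-2s -> bank %d\n", name[i], bank[i]);
        printf("inter-bank calls: %u\n", cost());
        return 0;
    }

On these made-up numbers it ends up with S1/S2 together in one bank and T1/T2 in the other, with C nailed into the common area - the "ideal" grouping from the earlier post. A production version would also want to weight calls by execution frequency, which is exactly the information the linker doesn't have, as pointed out above.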
uVision already has a feature that can tell you most of this kind of thing: the Source Browser creates a database of functions and their callers and callees. To view it...
The fundamental limitation that I find with the Source Browser is that there's no way to export any results from it. :-( It all has to be done interactively in the GUI - and is all lost when you close the GUI. :-( MSVC used to have a command-line tool for querying their browser database ("bscdump", IIRC) - something like that would be really useful!
Hi, for this function-pointer-with-code-banking problem, I have come to know that the OVERLAY directive can generate a table which has the addresses of the functions called through pointers. So does anybody know how to set this OVERLAY directive in Keil? Regards, Niraj
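The OVERLAY directive goes on the BL51 command line (or into the linker control file, reachable through the linker options in uVision). The usual function-pointer situation looks something like this in C - the names task_t, task_tab, measure_task, report_task and run are just invented placeholders for illustration:

    /* A table of function pointers. The linker sees references to
       measure_task/report_task here, but never the actual calls.   */
    typedef void (*task_t) (void);

    extern void measure_task (void);
    extern void report_task  (void);

    task_t code task_tab[2] = { measure_task, report_task };

    void run (unsigned char i)
    {
        task_tab[i] ();   /* indirect call: invisible to the call tree */
    }

Since the linker builds its call tree from direct calls only, you use OVERLAY to delete the false reference that the address table creates and to tell it the functions may be called from anywhere. Something along these lines - a sketch from memory, so verify the exact syntax in the BL51 manual and check your .M51 map file for where the references really land:

    BL51 MAIN.OBJ, TASKS.OBJ &
         OVERLAY (run ~ (measure_task, report_task), &
                  * ! (measure_task, report_task)) &
         TO PROJECT

The '~' removes a call-tree reference and '!' adds one. Note that OVERLAY only corrects the overlay analysis; it does not lift the banking restriction you quoted - the functions reached through the pointers still have to live where an indirect call can reach them, the common area being the safe choice.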