I have several very long subroutines, 1500+ lines each, each consisting of 20-50 discrete portions. For stack-usage reasons I don't wish to make a subroutine out of each portion, but for clarity I wanted to separate them. I moved each portion, typically the body of an if or while loop, into its own .c file and #included it in place, changing

```c
if (a > 100) {
    statement1;
    statement2;
    ...
    statement50;
} // if
```

into

```c
if (a > 100) {          /* main.c, line 30 */
#include "body1.c"      /* main.c, line 31 */
} // if                 /* main.c, line 32 */
```

with file body1.c containing the statements, without any subroutine syntax:

```c
statement1;             /* body1.c, line 1 */
statement2;             /* body1.c, line 2 */
...
statement50;            /* body1.c, line 50 */
```

The compiler does what it is supposed to do as far as the executable is concerned: it generates the correct opcodes, as if it were all inline in a single file. The problem I have is with emulation, and the reason can be seen by looking at the generated list file. The line-number information for the included .c files is inappropriately constructed. In the above example, the listing file attributes the HLL statements to main.c, lines 30, 1, 2, ..., 50, 32. What would be desired is to attribute them to main.c line 30, body1.c lines 1, 2, ..., 50, and then main.c line 32. When emulating, the HLL debug information presents lines from the parent file instead of from the included file, making it difficult to debug within the included files.

Has this been seen before? Is there anything that I can do? Is there anything that you can do?
"Then there would be a stack usage involved in the call to function body1()." Yes, but why is that such a big issue for you? If it really is such an issue, should you be using 'C' at all?
The "PREPRINT" compiler directive may get you the preprocessor output. (The manual says "with macros expanded", so I'm not certain whether that includes all preprocessing.) If not, then you could use another C compiler or an external preprocessor such as m4 to produce the flat file for later processing by the compiler. The #line directive allows you to reassign line numbers and the source file name, so you might be able to trick the debug information by including that directive in your sub-files.

A function call to a void (void) function will cost you two bytes of stack for the return address (and nothing else). Interestingly, I don't see an inline directive, nor does the compiler seem to automatically inline single uses of static functions. Perhaps Jon can shed some light there.

If those files use temporary local variables, it might actually save you some space to declare them as functions. If each of 30 routines needs a 16-bit temporary, then the parent routine will allocate 60 bytes. If you make 30 calls instead, the overlay processing will figure out that it doesn't need all 30 temps at the same time, and overlay them, for only two bytes of usage.

If it's absolutely crucial that you not affect the stack pointer for some reason, you should be aware that the code generator will sometimes temporarily use a few bytes of stack for its own purposes, on top of any stack it may use for calls to compiler internals. Your code is probably already using more than two bytes of stack to implement that function.

If you don't like function calls, you won't like what happens to the code if you turn the optimizer-for-size loose. It's pretty ruthless about hunting down any similar instruction sequences it can find that are longer than a procedure call/return, and turning them into functions with ACALL / RET. A rough rule of thumb from my code suggests about two bytes of instruction per LOC.
Four bytes of function call overhead is fairly small compared to the 100+ bytes to actually perform the action.
I will try your suggestions. The overlaying process leaves me with mixed feelings. I have found that it is usually good at recognizing and overlapping temporary variables in small scopes inside a function. A large issue I do have with the overlay process can be summed up with this example:

```c
void main(void)
{
    if (test) {
        unsigned char i;
        statements;
        sub1();
    } else {
        unsigned long j;
        statements;
        sub2();
    } // if
}

void sub1(void)
{
    unsigned long k;
    statements;
}

void sub2(void)
{
    unsigned char l;
    statements;
}
```

The overlay process will overlay mutually exclusive variables within a subroutine, and will overlay mutually exclusive subroutines' variables, but it will not overlay a subroutine's variables with its parent's if the subroutine call was made at a point that is not the maximum scope. i and j are mutually exclusive variables within a subroutine; they are overlaid within the same 4 bytes. k and l are in mutually exclusive subroutines called by main; they are overlaid within the same 4 bytes. This stub uses 8 bytes: i and j overlain at offset 0, and k and l overlain at offset 4. However, sub1() uses k at a point when only 1 byte (i) is in use, while sub2() uses l at a point when 4 bytes (j) are in use. The variables could be compacted as:

```
var  offset  length
i      0       1
j      0       4
k      1       4
l      4       1
```

This would use a total of 5 bytes, compared to the 8 above. Part of my issue with trying to accomplish what I am stems from my desire to minimize stack use (which consumes internal memory) while overlaying temporary-variable usage as best I can. Thus, I have several instances where multiple mutually exclusive scopes (if and switch statements) with different levels of memory usage force me to have a single subroutine, instead of multiple parallel and/or nested subroutines.
"The "PREPRINT" compiler directive may get you the preprocessor output." Yes, it does. I was going to leave mentioning that until we got a really good reason for this bizarre desire to avoid function calls!! But, since you mention it: The output, by default, has the same name is the input source file, with a .i filetype. I've suggested before adding .i to the list of 'C' source file types for your project - then it will be syntax coloured. "The manual says 'with macros expanded', so I'm not certain if that includes all preprocessing.)" Yes, it is the output of the preprocessor - so all macros will be expanded, all comments will be removed, all whitespace will be collapsed, only the "active" parts of conditionals will be present, etc, etc. This all makes the .i file pretty hard to read, so I say again: Why this aversion to functions?????? I think these efforts are misguided and pointless. "The #line directive allows you to reassign line numbers and source file name, so you might be able to trick the debug information by including that directive in your sub-files." No, it doesn't. :-( it just allows the compiler to give meaningful file+line references in its error messages. I think the loss of debug info is an object format thing - OMF51 just doesn't support it.
"The overlay process will overlay mutually exclusive variables within a subroutine" The overlaying works by block scope - which is not limited to just functions. eg,
int i,j : do stuff with i : do stuff with j :
{ int i; : do stuff with i : } { int j; : do stuff with j : }
I started to write a lengthy justification for my position, but the details are really unnecessary. Functions are useful, but my toolbox has more than one tool: macros and the #include of a code file also have their places. C is useful, but assembly still has its place. Floating point is useful, but the unsigned char and the signed long also have their places. I have given only the briefest description of my situation as it relates to my difficulty. I have multiple, in my opinion valid, reasons for trying to use #included code instead of functions:

1) stack usage for calls
2) stack usage for variable passing (see 3)
3) continued access to private, overlain variables
4) efficient overlay of variables
5) a wish to better expose the symmetry and hierarchy of the code
6) a wish to hide levels of detail

I apologize if your or my written comments have caused either of us to perceive negative attitudes. Nuance and inflection are sometimes lost in print, leading to implication/inference mismatches.
I apologize for not being concise with my wording. Yes, that is how the overlay works. The point I was trying to bring up, however, was the use of functions in different blocks.
```c
{
    char i;
    long k;
}
{
    long j;
    char l;
}
```

versus

```c
{
    char i;
    Sub1();
}
{
    long j;
    Sub2();
}

void Sub1(void) { long k; }
void Sub2(void) { char l; }
```
Your reference to stack usage for variable passing gives function() as an example. That won't use any stack for variables. Even if it did pass a variable, Keil would assign it to a register, starting with R7, depending on how many variables are passed. You should read the manual on this. Even though you sound like you have used assembler with C, you keep posting about variables on the stack, which Keil does not use.
Again, I apologize for being imprecise. At times I have been interchanging "stack" with "internal memory". Also, my examples have been pseudo-code, giving just enough detail to show my issue without the need to show the entire large routines. I am not looking only to break one file into several contiguous pieces, but also to break it into several nested levels of #includes. If I were instead to replace them with functions, that would:

a) add 2 bytes of stack for each level of nesting;
b) place the overlaid variables used by each level of nesting after the longest usage for each level;
c) require the promotion of overlayable, private temporary variables to non-overlayable global variables, or accept the inefficiencies of variable passing.
It seems to me that what this boils down to is that if you have to resort to these methods to save a few bytes of stack space, then you are probably using the wrong processor! Where at all possible, good coding practice should be followed unless it is absolutely necessary to do otherwise - and of course that includes having a logical program structure. This could all be solved if C51 allowed an "inline" keyword. Although the Keil compiler and linker tend to have features that make programs smaller (e.g. identifying common block subroutines), it is occasionally very useful to be able to do the opposite and "inline" a function, as some other (non-8051) compilers permit.
This project on this platform has been in active development for close to five years now, with multiple feature additions, both minor and major, during that time. Realistically, it is almost too much to ask of the platform, and while a successor architecture is on the way, the decision is that this guy must, at all costs, have the improvements now.
A lot of new 8051 derivatives have come out in 5 years. Are you up to date ?
This is continuing development on existing hardware, with I don't know how many thousands of units in the field. The hardware is set. While new platforms are being developed, this platform is being squeezed to within an inch of its life. Designed over five years ago, it uses a Dallas 80C320 running at 33 MHz. The interrupts are all in assembly, hand-coded for speed and code size, using register banks and multiple data pointers. Interrupt stack usage is detailed. Code-execution stack usage is detailed. Internal-memory variables are overlain and detailed. At this point, increasing any of their use will require a decrease somewhere else - a sacrifice I am not willing to make unless I must. I am using both code banking and multiple sets of data banking.

The PREPRINT-produced .i file looks like it will serve: macros and #includes are expanded to produce a flat file, which will allow meaningful debug information and references to source-code files. The asymmetry of removing whitespace (which undoes indentation) while retaining the blank lines left after removing comments does look a little odd.
For what it's worth, the standalone C preprocessor 'cpp' maintains whitespacing and would leave your "flat" file much more readable. Free (i.e., FSF/GNU-ish) and commercial versions are available.
Compiling and linking from the .i file does produce the debug data that I was looking for. However, having the target compile x.c to produce x.i, and then compile x.i to produce x.obj, causes the linker to attempt to link x.obj twice (compiling x.c produces an x.obj I don't want), which generates a warning. I have not been able to find a way to include it as a compile for the project without also including it as a link for the project. Warnings I can live with.
"I have not been able to find a way to include it as a compile for the project without also including it as a link for the project."

In uVision2, you can right-click on the source file, select Options, and un-select "Include in target build".

Jon