
#include of C file and listing/emulation

I have several very long subroutines, 1500+ lines each, each consisting of 20-50 discrete portions. For stack reasons I don't wish to make a subroutine out of each portion, but for clarity I wanted to separate them. I moved each portion, typically the body of an if or while loop, into its own .c file and #included it in place, changing

if (a > 100) {
statement1;
statement2;
...
statement50;
} // if


into

if (a > 100) {       // main.c, line 30
#include "body1.c"   // main.c, line 31
}                    // main.c, line 32

with file "body1.c" containing the statements, without any subroutine syntax.

statement1;    // body1.c, line 1
statement2;    // body1.c, line 2
...
statement50;   // body1.c, line 50


The compiler does what it is supposed to do as far as the executable is concerned: it generates the correct opcodes, just as if everything were inline in a single file.

The problem I have is with emulation, and the cause can be seen in the generated list file: the line-number information for the included .c files is constructed incorrectly. In the example above, the listing attributes the HLL statements to main.c lines 30, 1, 2, ..., 50, 32. What is wanted is main.c line 30, then body1.c lines 1, 2, ..., 50, then main.c line 32.

When emulating, the debug information presents HLL lines from the parent file instead of from the included file, making it difficult to debug within the included files.

Has this been seen before? Is there anything that I can do? Is there anything that you can do?

Parents
  • "The "PREPRINT" compiler directive may get you the preprocessor output."

    Yes, it does.
    I was going to leave mentioning that until we got a really good reason for this bizarre desire to avoid function calls!!

    But, since you mention it:
The output, by default, has the same name as the input source file, with a .i filetype.
    I've suggested before adding .i to the list of 'C' source file types for your project - then it will be syntax coloured.

    "The manual says 'with macros expanded', so I'm not certain if that includes all preprocessing.)"

    Yes, it is the output of the preprocessor - so all macros will be expanded, all comments will be removed, all whitespace will be collapsed, only the "active" parts of conditionals will be present, etc, etc.

    This all makes the .i file pretty hard to read, so I say again: why this aversion to functions?
    I think these efforts are misguided and pointless.

    "The #line directive allows you to reassign line numbers and source file name, so you might be able to trick the debug information by including that directive in your sub-files."

    No, it doesn't. :-(
    It just allows the compiler to give meaningful file+line references in its error messages.
    I think the loss of debug info is an object format thing - OMF51 just doesn't support it.
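    For reference, here is a minimal sketch of what #line does on a standard C compiler (the function names are invented for illustration): it re-seats the compiler's notion of the current file and line, which is why it fixes error messages and `__FILE__`/`__LINE__` but cannot conjure debug records that the object format has no room for.

```c
#include <assert.h>
#include <string.h>

/* Before any #line directive, __LINE__ tracks the physical position
   in this file. */
static int physical_line(void) { return __LINE__; }

/* #line tells the compiler "the next line is line 100 of body1.c";
   error messages and __LINE__/__FILE__ follow suit from here on. */
#line 100 "body1.c"
static int reseated_line(void) { return __LINE__; }          /* 100 */
static const char *reseated_file(void) { return __FILE__; }  /* "body1.c" */
```

    A body1.c in the thread's scheme could open with such a directive so that diagnostics at least name the right file, even if the OMF51 debug records do not follow.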

Reply

Children
  • I started to write a lengthy justification for my position, but the details are really unnecessary.

    Functions are useful, but my toolbox has more than one tool. Macros and the #include of a code file also have their places.

    C is useful, but Assembly still has its place.

    Floating point is useful, but the unsigned char and the signed long also have their places.

    I have given only the briefest description of my situation as it relates to my difficulty. I have multiple, in my opinion valid, reasons for trying to use #included code instead of functions:

    1) stack usage for calls
    2) stack usage for variable passing (see 3)
    3) continued access to private, overlain variables
    4) efficient overlay of variables
    5) wish to better expose the symmetry and hierarchy of the code
    6) wish to hide levels of detail.

    I apologize if your or my written comments have caused either to perceive negative attitudes. Nuance and inflection are sometimes lost in print, leading to implication/inference mismatches.

  • Your reference to stack usage for variable passing gives function() as an example. That won't use any stack for variables.

    Even if it did pass a variable, Keil will assign it to a register, starting with R7, depending on how many variables are passed. You should read the manual on this. Even though you sound like you have used assembler with C, you keep posting about variables on the stack, which Keil does not use for parameter passing.

  • Again, I apologize for being imprecise. At times I have been interchanging "stack" with "internal memory". Also, my examples have been pseudo-code, giving just enough detail to show my issue, without the need to show the entire large routines.

    I am not looking only to break one file into several contiguous pieces, but to also break it into several nested levels of #includes. If I were to instead replace them with functions, that would

    a) add 2 bytes of stack for each level of nesting;
    b) mean that the overlay of variables used by each nesting level would be placed after the longest usage for that level;
    c) require promoting overlayable, private temporary variables to non-overlayable globals, or accepting the inefficiencies of variable passing.

  • It seems to me that what this boils down to is that if you have to resort to these methods to save a few bytes of stack space then you are probably using the wrong processor!

    Where at all possible, good coding practice should be followed unless it is absolutely necessary to do otherwise - of course that includes having a logical program structure.

    Of course, this could all be solved if C51 allowed an "inline" keyword. Although the Keil compiler and linker tend to have features that make programs smaller (e.g. identifying common block subroutines), it is occasionally very useful to be able to do the opposite and "inline" a function, as some other (non-8051) compilers permit.
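    For comparison, a minimal sketch of what that keyword gives you on compilers that have it (C99-style; the function name here is invented): the compiler may substitute the body at each call site, so there is no call/return or stack cost, yet the source keeps function structure.

```c
/* C99 "static inline": a hint that the body may be substituted at
   each call site, avoiding the 2-byte return-address cost discussed
   in this thread while keeping function syntax in the source. */
static inline int clamp100(int a) { return a > 100 ? 100 : a; }
```

    With full optimization a call such as clamp100(x) typically compiles to the comparison itself, with no CALL/RET pair.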

    This project on this platform has been in active development for close to five years now, with multiple feature additions, both minor and major, during that time. Realistically, it is almost too much to ask of the platform, and while a successor architecture is on the way, the decision is that this guy must, at all costs, have the improvements now.

  • A lot of new 8051 derivatives have come out in 5 years. Are you up to date ?

    This is continuing development on existing hardware with I don't know how many thousands of units in the field. The hardware is set. While new platforms are being developed, this platform is being squeezed to beyond an inch of its life.

    Designed over five years ago, it uses a Dallas 80c320 running at 33 MHz. The interrupts are all in Assembly, hand coded for speed and code size, using register banks and multiple data pointers. Interrupt stack usage is detailed. Code execution stack usage is detailed. Internal memory variables are overlain and detailed. At this point, to increase any of their use will require a decrease somewhere else, a sacrifice I am not willing to make unless I must. I am using both code banking and multiple sets of data banking.

    The PREPRINT-produced .i file looks like it will serve; macros and #includes are expanded to produce a flat file, which will allow meaningful debug information and references to source code files. The asymmetry of removing whitespace, which undoes indentation, while retaining the blank lines left after removing comments, does look a little odd.

  • For what it's worth, the standalone C preprocessor 'cpp' maintains whitespacing and would leave your "flat" file much more readable. Free (i.e., FSF/GNU-ish) and commercial versions are available.
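    As a sketch (the file names and contents here are examples only), flattening a nested #include with the standalone GNU cpp might look like this:

```shell
# Build a tiny main.c that #includes a body file, then flatten it.
printf '    int flag = 1;\n' > body1.c
printf 'if (a > 100) {\n#include "body1.c"\n}\n' > main.c

# -P suppresses the "# linenum" markers so the output is plain source;
# unlike PREPRINT, cpp keeps the original indentation and line breaks.
cpp -P main.c -o main.i
cat main.i
```

    The resulting main.i contains the body file's lines inlined where the #include stood, with their indentation intact.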