
Header file inclusions

After reading a post, I embarked on another long ramble about #include'd header files (a topic that keeps popping up).

Reference Post: http://www.keil.com/forum/docs/thread14314.asp
(And also a whole bunch of those #include/header file questions)

In that post, Per was on the correct path as far as inclusion (header) files go.

Andy is also right when he explains that the 'extern' on function prototypes is there for completeness. Be complete.

I disagree with the concerns about a "slow computer", "max optimization", or how often a single uber-header file gets altered, simply because of compile time. Compile time should not be a factor in building embedded code: GET A FASTER PC if you have a problem with that. (And if you like to compile after every few added lines, then you are simply hacking code and not really thinking about what you are doing during your code-monkey time.)

The question of a "small" or "large" program means nothing to me, because even 'small' programs will either have a single main.c file (module) or be broken into various .C modules. A large program will have many modules; if it doesn't, then you have a serious problem with your code-monkey work anyway. (I've seen horror source code: a single .C file that contained EVERYTHING and had 10-15 conditional compile options. Whoever worked on that piece of ___ had no business writing software.)

Every .C source file shall have an associated .H file:

SPI.C shall also have an SPI.H file that SPI.C includes
UART3.C shall also have a UART3.H file that UART3.C includes
RoadKill.C shall also have a RoadKill.H file that RoadKill.C includes
main.c shall also have a main.H file that main.c includes

If RoadKill.C needs access to SPI.C functionality, RoadKill.C shall include the SPI.H file for its accessors and/or mutators or its SPI-specific #defines.

If main.c needs RoadKill.C functionality and SPI functionality, then main.c shall also include both of those associated .H files (RoadKill.H and SPI.H).
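
For example, a minimal sketch of that layout (the function names here are invented for illustration):

/* SPI.H -- public interface for SPI.C */
#ifndef SPI_H
#define SPI_H

extern void SPI_Init( void );              /* defined in SPI.C */
extern unsigned char SPI_ReadByte( void ); /* defined in SPI.C */

#endif /* SPI_H */

/* RoadKill.C -- needs SPI functionality, so it includes SPI.H */
#include "SPI.H"

void RoadKill_Update( void )
{
    unsigned char status = SPI_ReadByte(); /* use the SPI accessor */
    (void)status;                          /* placeholder use      */
}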

All common .H files, such as the CPU register definitions, are part of the standard inclusion section. Things that globally affect the system should go into headers that are included in all modules via that standard inclusion section. I have a header file called Logic.H which does some fairly simple things, like defining TRUE and FALSE. Another is dTypes.H, which has the typedefs for the data types ( typedef unsigned char u8; etc. ).
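
A sketch of what those two headers might contain, based on the description above (the exact widths of u16/u32 depend on the target compiler):

/* Logic.H -- simple logic definitions */
#ifndef LOGIC_H
#define LOGIC_H

#define FALSE 0
#define TRUE  (!FALSE)

#endif /* LOGIC_H */

/* dTypes.H -- named data types (widths assume a typical 32-bit target) */
#ifndef DTYPES_H
#define DTYPES_H

typedef unsigned char  u8;
typedef unsigned short u16;
typedef unsigned long  u32;

#endif /* DTYPES_H */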

Any global data should have its own "global_data.H" file. I use "gData.H" as my 'global data' header file, and I keep it as empty as I can. I use main.C to ALLOCATE the data space, while all other modules REFERENCE (via 'extern') the data-stores ("variables" for you non-engineer types).
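
A sketch of that allocate-versus-reference split (the variable names are hypothetical):

/* gData.H -- all modules REFERENCE the data-stores through this file */
#ifndef GDATA_H
#define GDATA_H

#include "dTypes.H"

extern u8  gSystemState;   /* no storage allocated here */
extern u16 gTickCount;

#endif /* GDATA_H */

/* main.c -- the ONE module that ALLOCATES the data space */
#include "gData.H"

u8  gSystemState = 0;
u16 gTickCount   = 0;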

Some people use conditional-compile pre-processor parameters as an alternative. (I do not use them for code inclusion or exclusion.)

#ifdef INCLUDE_DEBUG_CODE

  printf( "Code-Monkey Testing ID# %d) End-of-Conversion failed", debug_number );
  debug_number++; // bump testing value

#endif

ALL 'debug' code should be removed prior to qualification (which I'm sure you all do before you release it).

--Cpt. Vince Foster
2nd Cannon Place
Fort Marcy Park, VA

<< more rambling to follow >>

Parents
  • Per,

    I totally agree.

    I have ALL of my source files in a single project directory, and avoid at all costs the inclusion of intrinsic compiler tool files:

    I avoid this:
    #include <stdio.h>
    
    and will copy it over to the project folder and do this:
    
    #include "stdio.h" // I don't recall ever actually using stdio.h,
                       //  but this is just an example
    

    I also use Keil's ability to generate the make batch files, and 'modify' them to meet my source-code standards.

    This way I can have ALL the files needed for a complete build in one directory set. I create a CD from it, bring it over to the Qualified Station, and before performing the formal ATP (Acceptance Test Procedure) I build the CSCI ("Computer Software Configuration Item"---the executable file) on that station using the CD's source files.

    Even when I copy an intrinsic <library.h> file, I will usually re-write the file to 'my standards' so I can understand what is going on, and then trust the code. Just now (about 15 minutes ago), I saw that a vendor-supplied library utility I was going to use did this:

       temp = (new_reg_value) & (MASKING_VALUE);

       // where temp          is a u32
       //       new_reg_value is a u32
       //       MASKING_VALUE is a u16 value
       // so the translation is this:

       temp = (u32)(new_reg_value) & ((u16)MASKING_VALUE);

       // the MASKING_VALUE is 0xFFFF

       temp = (u32)(new_reg_value) & (u16)0xFFFF;

       // This assumes that the compiler tool will promote the
       // u16 mask, zero-filling the upper 16 bits, and act like this:

       temp = (u32)(new_reg_value) & (u32)0x0000FFFF;
    
    

    So now I don't trust their library, and will most likely re-write the functions I need from it.

    But I digress as usual.

    --Cpt. Vince Foster
    2nd Cannon Place
    Fort Marcy Park, VA

Children
  • "So now I don't trust their library, and will most likely re-write the functions I need from it."

    No. What you are saying is that you do not trust the compiler to do what it must do to follow the standard, i.e. make sure that both sides of an operator are converted to the same size, and know what to do with the smaller operand when the larger operand is signed or unsigned.

    In this case, I am always allowed to do:

    my_128_bit result = my_64_bit_value & 2;
    


    The compiler is required to make sure that the "2" is correctly expanded to perform a masking of the 64-bit operand.

    And the compiler is required to expand the 64-bit intermediate result into a 128-bit value (while taking care of sign extension or zero-fill) before assigning to the 128-bit target.
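
    A minimal sketch of that guarantee, scaled down to portable C99 fixed-width types (128-bit integers aren't standard C; the type names above were illustrative):

    #include <stdint.h>

    void demo( uint32_t value )
    {
        /* The int constant 2 is converted to uint32_t before the AND
           (the usual arithmetic conversions); the 32-bit result is
           then zero-extended to 64 bits for the assignment. */
        uint64_t result = value & 2;
        (void)result;
    }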

    If you distrust the compiler to manage this, then your only option is to go full assembler.

    I would be more worried if a 16-bit variable got ANDed with a 32-bit constant bitmask. I would always wonder whether that 16-bit variable might fail to store some important bits used somewhere else in the program.

    But you are correct in form: clearly writing the code with the same size all the way through will produce the same result, but it will also help indicate that the developer wasn't just lucky that the language standard is well thought out, and instead decided the sizes of the variables and bitmasks based on real requirements.

    Nothing is as bad as finding code originally written with a 16-bit bitmap and later modified to fit more flags, where some parts of the code were forgotten, resulting in unwanted pruning of the new flags when those parts of the code get called.
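
    A sketch of that trap (all names invented): the flag word grew to 32 bits, but one forgotten 16-bit mask quietly prunes the new flags:

    #include <stdint.h>

    #define FLAG_NEW_FEATURE  0x00010000uL   /* added after the widening */

    uint32_t sanitize_flags( uint32_t flags )
    {
        /* Leftover from the 16-bit days: silently clears every flag
           above bit 15, including FLAG_NEW_FEATURE. */
        return flags & 0xFFFFu;
    }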

    For some reason the language designers chose to give us named data types - maybe they didn't want us to duplicate data type declarations all over the source and then forget to change all the declarations when the data type needs to change. If nothing else, we can rename our modified datatype and get all affected parts of the code to produce compilation errors for us to investigate.
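
    For example (a sketch): widen the type and rename it at the same time, and the compiler points out every declaration that was missed:

    #include <stdint.h>

    /* before: typedef uint16_t flags16_t;  -- used all over the source */
    /* after widening, rename instead of silently changing the size:    */
    typedef uint32_t flags32_t;

    /* Any leftover "flags16_t" declaration now fails to compile,
       pointing straight at the code that still assumes 16 bits. */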

  • Per,

    Right again. I don't trust compilers either, so that is why I try to make the code as complete as possible.

    In your example of

    my_128_bit result = my_64_bit_value & 2;
    

    One would assume that the compiler should handle the situation, but I would have written it as:

    my_128_bit result = (u128)( ((u128)my_64_bit_value) & ((u128)2) );
    


    I may over-use type-casting, but it makes me feel better.

    My needs from the library are limited (ARM w/ STMicro's library set), and I have been re-writing some of the supplied functions as I need them so I have 'total' control of the software.

    The library is very useful, and I use their functions until the code works as closely as I need it to; then I go back and 'optimize' the library functions into the modules that use them. Then I'll have exactly what is needed and won't depend upon an outside source's work.

    This way, when the software is finished, I won't need the vendor's library set, and the code will run using more direct, 'standardized' Cortex-M3 memory-mapped SFR accesses instead of the generic methods the vendor uses... which may have side-effects; I worry about that.
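
    A sketch of that direct SFR style (the register name and address below are placeholders; the real ones come from the ST reference manual):

    #include <stdint.h>

    /* Placeholder address -- look up the actual SFR in the device manual */
    #define MY_PERIPH_CR  (*(volatile uint32_t *)0x40021000uL)

    void enable_my_periph( void )
    {
        /* Set the enable bit directly; no vendor library call involved. */
        MY_PERIPH_CR |= (1uL << 0);
    }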

    My biggest concern has to do with library updates from both Keil and STMicro. But once I get my code-monkey work running, it should be insulated from any updates to their source code. The concern is how the Keil compiler itself may then deal with the updated libraries, and whether the core compiler updates remain compatible with my rendition of the re-worked library source.

    --Cpt. Vince Foster
    2nd Cannon Place
    Fort Marcy Park, VA