
Default type when only 'unsigned' is stated

Hey

I'm looking through some code where the type of a parameter in a function call is stated simply as unsigned.

The data type (char, int, long) is not stated.
Does anyone know which type is assumed when it is not stated?
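
For what it's worth, a bare unsigned in C simply means unsigned int; a minimal illustration (the function name is made up):

    void set_count(unsigned n);      /* 'unsigned' by itself ...               */
    void set_count(unsigned int n);  /* ... means 'unsigned int', so these two
                                        declarations are identical             */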

Parents
  • But where do you see a contradiction?

    Note that the text does not say anything about an int having to fit in a single register or in the accumulator. Or that a multiply or add must be possible with a single machine instruction. So there isn't any contradiction.

    The text talks about a natural (not native) data type large enough to store at least the range specified by INT_MIN and INT_MAX in <limits.h>. The standard explicitly says that INT_MIN and INT_MAX must span (at least) -32767 to +32767. So the natural (not native) size for an 8-bit processor would then be 16 bits, since that is the most efficient data size that fulfills the INT_MIN/INT_MAX requirements.

    Letting the 8-bitter have a 32-bit int would not be a natural choice since it would require a lot of extra instructions that are not required to fulfill the standard. A 16-bit int is a natural choice since it is the simplest-to-implement and fastest data type that does fulfill the standard. An 8-bit int is not a natural choice since it is a size that explicitly violates the requirements of the standard.
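
    As an aside, that requirement can be checked at compile time; a minimal sketch (INT_MIN/INT_MAX are the standard <limits.h> macros, the check itself is just an illustration):

    #include <limits.h>

    /* refuse to compile if this implementation's int is narrower than the standard allows */
    #if INT_MAX < 32767 || INT_MIN > -32767
    #error int does not span at least -32767 ... +32767
    #endif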

Children
  • Use a [fairly] standard typedef for declarations and make notes within that file if the processor has restrictions...

    /*--------------------------------------------------------------------------.
    ;   declare my own typedefs of the standard data types                      ;
    '--------------------------------------------------------------------------*/
    typedef          unsigned char   u8;    // CAUTION 16-bit only machine ... etc.
    typedef            signed char   s8;    // CAUTION 16-bit only machine ... etc.
    typedef          unsigned int    u16;
    typedef            signed int    s16;
    typedef          unsigned long   u32;
    typedef            signed long   s32;
    typedef          float           f32;
    
        /*----------------------------------------------------------------------.
        ; if so desired, the volatile data-type is defined                      ;
        '----------------------------------------------------------------------*/
    typedef volatile unsigned char   vu8;   // CAUTION 16-bit only machine ... etc.
    typedef volatile   signed char   vs8;   // CAUTION 16-bit only machine ... etc.
    typedef volatile unsigned int    vu16;
    typedef volatile   signed int    vs16;
    typedef volatile unsigned long   vu32;
    typedef volatile   signed long   vs32;
    typedef volatile float           vf32;
    


    And then it becomes rather simple to know the sign and bit-width.
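
    And if you want the compiler to back those names up, here is a minimal sketch of a (pre-C11) compile-time width check; the check_* names are my own and assume the typedefs above:

    #include <limits.h>

    /* each array size becomes negative -- a compile error -- if a width assumption is wrong */
    typedef char check_u8 [(sizeof(u8)  * CHAR_BIT ==  8) ? 1 : -1];
    typedef char check_u16[(sizeof(u16) * CHAR_BIT == 16) ? 1 : -1];
    typedef char check_u32[(sizeof(u32) * CHAR_BIT == 32) ? 1 : -1];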

    --Cpt. Vince Foster
    2nd Cannon Place
    Fort Marcy Park, VA

  • No! Don't rely on notes - code monkeys will miss them!

    Use conditional compilation to ensure that it's right!

    #if defined COMPILER_A
    // Definitions for Compiler 'A'
    typedef          unsigned char   u8;
    typedef            signed char   s8;
    typedef          unsigned int    u16;
    typedef            signed int    s16;
    typedef          unsigned long   u32;
    typedef            signed long   s32;
    typedef          float           f32;
    
    #elif defined COMPILER_B
    // Definitions for Compiler 'B'
    typedef          unsigned char   u8;
    typedef            signed char   s8;
    typedef          unsigned short  u16;
    typedef            signed short  s16;
    typedef          unsigned int    u32;
    typedef            signed int    s32;
    typedef          float           f32;
    
    #else
    #error Unknown Compiler!
    #endif
    


    Or

    #if defined COMPILER_A
    #include "compiler_a.h"
    
    #elif defined COMPILER_B
    #include "compiler_b.h"
    
    #else
    #error Unknown Compiler!
    #endif
    

    Although, if you're just starting now, it'd make sense to use the C99 standard names rather than u8, s16, etc...
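
    A minimal sketch of what that looks like with the C99 <stdint.h> names (the variable names are just examples):

    #include <stdint.h>

    uint8_t  flags;        /* exactly  8 bits, unsigned */
    int16_t  offset;       /* exactly 16 bits, signed   */
    uint32_t tick_count;   /* exactly 32 bits, unsigned */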

  • Anyway, I understand that. It's not a 'bad' way to go, especially since code-monkeys have... uhm (I'm lethologically challenged at the moment)... uhm, "issues."

    I'm just not fond of conditional compilation, but I think this particular use of it would be worth the effort if incorporated properly.

    --Cpt. Vince Foster
    2nd Cannon Place
    Fort Marcy Park, VA

  • I would limit the conditional to, e.g.,

    #ifndef KeilC51
    #error wrong definitions, make your own
    #endif

    that keeps the clutter down.

    and the "make your own" should scare a code monkey to quit

    BTW Vince, I, personally, prefer U8 to u8 ....

    Erik

  • BTW Vince, I, personally, prefer U8 to u8 ....

    Hmmmm, I use ALL_CAPITAL_LETTERS for #defines.

    Granted a typedef is a form of #define, but I, personally, prefer u8 as opposed to U8.

    (After checking: my editor [CodeWright v7.5] highlights U8 as a typedef, even though my code never actually typedefs U8. Thanks for helping me catch that).

    --Cpt. Vince Foster
    2nd Cannon Place
    Fort Marcy Park, VA