
A C51 Compiler Error!!

When an expression is written in C51 in the format shown below:

#define VALUE ((0x3F << 24) | (0x1F << 16) | (0x01 << 1))

it results in a value of 0x00000002.

The same result is obtained when the expression is rewritten as:

#define VALUE ((0x3F << 24) + (0x1F << 16) + (0x01 << 1))

The same result is also generated when it is rewritten as:

#define VALUE ((0x01 << 1) | (0x1F << 16) | (0x3F << 24))

I have also tried casting the whole expression to (unsigned long), and casting each of the three individual components to (unsigned long), with no change in behavior.

When an expression similar to the #define is used on the right-hand side of an assignment to an unsigned long variable, the compiler demonstrates the same incorrect behavior shown above.

Is this issue something that we can get Keil to fix in C51 and issue an updated compiler?

Michael Karas

  • #define VALUE ((0x3FL << 24) | (0x1FL << 16) | (0x01L << 1))

  • Is this issue something that we can get Keil to fix in C51 and issue an updated compiler?

It's not a problem with C51. Arithmetic operations are performed using int (which on C51 is a 16-bit type) unless something in the expression tells the compiler that a wider type is involved.

  • The danger with cutting and pasting code from the net is that most hobbyists today target ARM or program a PC, where int is 32 bits or even 64 bits. So most sample code on the net assumes int is at least 32 bits wide.

    When moving the code to the 8051, lots of interesting problems may result.

  • Per,

    The code in question was not downloaded off the internet. It was original code that I thought would automatically compute using 32-bit values.

    I knew that the C51 compiler uses 32-bit entities to store #define values but forgot about the need to suffix constants with the "L".

    Your point is well taken for when code is acquired in the context that you mention.

    Michael Karas

  • The C language standard revolves around the int data type. So any integer evaluation that doesn't involve a long operand will happen at int size.

    The #define can handle any number of digits, since #define is just a glorified text-replacement mechanism. So even on an 8-bit processor, it's possible to stringize a 100-digit integer through the preprocessor.

    This issue arises when the C compiler starts to evaluate the "pasted" text, after the preprocessor has performed all substitutions.

    It is the C compiler that performs the | and << operations of the above expression. And even when operating on constant data that can be computed at compile time, the compiler still needs to follow the same rules (signed/unsigned, integer size, evaluation order, ...) as for runtime-computed expressions.

    This is where a compiler can help out, by warning about potential truncation of data. Some compilers are better than others at catching such mistakes. I can't see any situation in which I would want to shift a 16-bit integer by 24 bits, so it would have been easy for the compiler to warn here.