
C51: Is this a compiler bug?

... or a misunderstanding on my part?

From string "\x0CTUV", the compiler generates 0x0C 0x55 0x56 0x57 0x00

From string "\x0CABC", the compiler generates 0xCA 0x42 0x43 0x00 ... rather than the expected 0x0C 0x41 0x42 0x43 0x00

I thought the \x escape sequence in a string instructed the compiler to encode the very next two characters as a hexadecimal byte.

Am I missing something?

Parents
  • An 8-bit processor with 8-bit characters cannot swallow more than 8 bits of data. Because of this, you get into undefined territory as soon as you try to specify a hexadecimal value larger than the compiler can represent in a character.

    In short: you do not know exactly what will happen. Because of this, you should not make any assumptions about the number of digits the compiler will process, but should force a break after the end of the constant (a sketch of one way to do that follows at the end of this reply).

    There is no rule that says the compiler should consume any leading zero digits, then consume exactly bits/4 hex digits, and then break. It can consume all the digits, just doing n = n*16 + digit, and emit the least significant 8 bits. Or it can consume all the digits but emit only the first 8 bits it collected. Or it can decide to break as soon as it gets an overrun. That is why assumptions are bad: you test on one compiler and assume that you have found a magic rule that is generally applicable.

    If there is no hard rule that a compiler _must_ behave in a specific way, then you should do your best to stay away from this implementation-specific zone. It will bite. It may bite when you switch to a different compiler, but it may just as well bite if you change a compilation flag or update to the next release of the same compiler.

    You should get the language standard, and spend some time with it. The standard says in section 6.4.4.4 (my emphasis):

    Point 6:
    The hexadecimal digits that follow the backslash and the letter x in a hexadecimal escape sequence are taken to be part of the construction of a single character for an integer character constant or of a single wide character for a wide character constant. The numerical value of the hexadecimal integer so formed specifies the value of the desired character or wide character.

    Point 7:
    Each octal or hexadecimal escape sequence is the longest sequence of characters that can constitute the escape sequence.

    From the above, it could be "assumed" that the Keil compiler is buggy and should have consumed all characters. But note point 9:
    The value of an octal or hexadecimal escape sequence shall be in the range of representable values for the type unsigned char for an integer character constant, or the unsigned type corresponding to wchar_t for a wide character constant.

    I.e. it is up to you to make sure that you do not feed the compiler more digits than will fit in a character. You made an invalid assumption and broke a constraint specified in the language standard. That left you in limbo land.
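
    As a concrete way of forcing that break: adjacent string literals are concatenated only after escape sequences have been processed, so splitting the literal stops the \x escape from swallowing the letters that follow. A minimal sketch (ordinary hosted C, used here only for illustration; the bytes in the comment are the ones the question expected):

        #include <stdio.h>

        /* "\x0C" ends at the closing quote, so A, B and C stay ordinary
         * characters: the array holds 0x0C 0x41 0x42 0x43 0x00.          */
        static const char msg[] = "\x0C" "ABC";

        int main(void)
        {
            size_t i;
            for (i = 0; i < sizeof msg; i++)
                printf("%02X ", (unsigned)(unsigned char)msg[i]);
            return 0;
        }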


Children
  • From the above, it could be "assumed" that the Keil compiler is buggy and should have consumed all characters. But note point 9:
    The value of an octal or hexadecimal escape sequence shall be in the range of representable values for the type unsigned char for an integer character constant, or the unsigned type corresponding to wchar_t for a wide character constant.

    So perhaps the compiler's logic in this matter is, "consume hex digits up to, but not beyond, the range of an unsigned char".

    I guess I can live with that and I agree that no assumptions should be made in this area with different compilers.

    I appreciate the feedback.

  • The point is that you should not concern yourself with the compiler's logic; you should concern yourself with ensuring that your source text is completely unambiguous and, therefore, not subject to any misinterpretation by any compiler logic! One way to spell it unambiguously is sketched below.
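
    For example (an illustrative sketch, not from the original thread), two spellings that no conforming compiler can read any other way:

        /* Both arrays hold 0x0C 0x41 0x42 0x43 0x00.                      */
        static const char a[] = "\x0C" "ABC";  /* split literal ends the escape */
        static const char b[] = "\014ABC";     /* octal escape: at most three
                                                  octal digits, so it stops at 014 */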