In the following code:-
char a = 0x80;
unsigned char b = 0x80;

if (a == 0x80) printf("a==0x80\n");
if (b == 0x80) printf("b==0x80\n");
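For anyone trying to reproduce this outside the original project, here is a minimal self-contained sketch of the same test. It assumes a hosted compiler where plain char is signed (as it is by default with Keil C51); on such a target only the second message prints.

#include <stdio.h>

int main(void)
{
    char a = 0x80;              /* plain char assumed signed: a typically holds -128 */
    unsigned char b = 0x80;     /* b holds 128                                       */

    if (a == 0x80)              /* -128 == 128 after promotion to int: false         */
        printf("a==0x80\n");
    if (b == 0x80)              /* 128 == 128 after promotion to int: true           */
        printf("b==0x80\n");

    return 0;
}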
Jon, I understand what you're saying, but what value is the "0x80" (or the '\x80') in each of the following?
char a = 0x80;
(a==0x80)
(a=='\x80')
Have you tried disabling the ANSI integer promotions?
What value is the "0x80" in (a==0x80)? 0x0080. And what about the '\x80' in (a=='\x80')? Again, 0x0080.

These things are ALL integers as per ANSI: 0x80 is 128, or 0x0080; '\x80' is also 128, or 0x0080. The "character" 'A' is 0x0041, and its type is int (not char). If you cast the 0x80 to a char, for example:
(a==(char)0x80)
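To see that point on a desktop compiler, the following sketch may help; it is standard C and assumes a target where plain char is signed (the comparison result differs where char is unsigned). The sizeof lines show that 0x80, '\x80' and 'A' are all constants of type int.

#include <stdio.h>

int main(void)
{
    char a = 0x80;                       /* typically -128 where plain char is signed */

    /* Hex literals and character constants both have type int in C. */
    printf("sizeof 0x80   = %u\n", (unsigned)sizeof 0x80);
    printf("sizeof '\\x80' = %u\n", (unsigned)sizeof '\x80');
    printf("sizeof 'A'    = %u\n", (unsigned)sizeof 'A');

    if (a == (char)0x80)                 /* the cast brings the literal back down to char */
        printf("a == (char)0x80 matches\n");

    return 0;
}

All three sizeof lines print sizeof(int): 2 on the C51, typically 4 on a PC.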
Incidentally, 0x80 in 8-bit two's complement represents -128, not -1. -1 is all ones, 0xff. (To convert, invert all the bits and add 1.)

The problem here is in the comparison. In the sequence:

a = 0x80;    // same as a = -128
if (a == 0x80)

the 0x80 is a literal constant of type "int", while a is of type signed char. To compare the two, the compiler will convert the signed char to an int. You can represent positive 128 decimal (0x80) in a 16-bit integer, so that's the right side of the comparison. The left side, a, has the value -128. -128 != 128, so the comparison fails.

Change the test to either

if (a == (signed char)0x80)

or

if (a == -128)

and see what happens. Disabling integer promotion as Andy suggests should make the problem go away as well. (Just remember that integer promotion is standard ANSI behavior.)

You might also run into this problem when it comes to procedure parameters, particularly with printf(). Read up on the Keil extensions to printf format specifiers, in particular the ones like %bu (print unsigned byte) instead of just %u.
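Here is a brief sketch of that explanation, assuming a two's-complement machine where signed char is 8 bits; it prints the promoted values and then tries the two suggested fixes.

#include <stdio.h>

int main(void)
{
    signed char a = 0x80;                 /* stored as -128 in 8-bit two's complement   */

    /* Both operands of == are promoted to int before the comparison. */
    printf("a promoted to int = %d\n", a);      /* -128 */
    printf("0x80 as an int    = %d\n", 0x80);   /*  128 */

    if (a == 0x80)                        /* -128 == 128: fails                         */
        printf("a == 0x80\n");
    if (a == (signed char)0x80)           /* literal forced back to -128 first: matches */
        printf("a == (signed char)0x80\n");
    if (a == -128)                        /* compare against the value actually stored  */
        printf("a == -128\n");

    return 0;
}

On the C51 itself the same promotion issue is why the %bu/%bd length modifiers mentioned above matter for byte-sized printf() arguments.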
Thanks all, I understand a lot better now.
"Incidentally, 0x80 in 8-bit two's complement represents -128, not -1. -1 is all ones, 0xff."

Errr. That's what I meant. -128, yeah. -128.

Jon