unsigned char aa, bb, cc, dd;

aa = 0xab;
bb = 0xcd;
cc = (aa + bb) % 255;                    // aa + bb = 0x178
dd = (unsigned char)((aa + bb) % 255);
When debugging, you can see the results: cc is 0x78 and dd is 0x79. In fact, both cc and dd should be 0x79.
I debugged this in C51 9.60 and 9.03; both had the same output.
I tried VC2010 and TI CCS 3.3; both get the correct result, 0x79.
You got all of that backwards, I think.
Modern C would meet the OP's expectations. Any C compiler respecting any kind of standard, all the way back to K&R1, would have (aa + bb) equal to 0x178, because the integer promotions widen sub-int operands to int before the addition. That % 255 is, just as universally, 0x79 (376 % 255 = 121). The conversion to unsigned char, whether implicit or via the explicit cast, leaves it at 0x79.
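A minimal sketch of that arithmetic in hosted, standard C (nothing compiler-specific assumed), with the intermediate values spelled out:

#include <stdio.h>

int main(void)
{
    unsigned char aa = 0xab, bb = 0xcd;

    // Both operands are promoted to int before the addition,
    // so the sum is 0x178 (376), not truncated to 8 bits.
    int sum = aa + bb;                       // 0x178
    int rem = sum % 255;                     // 376 % 255 = 121 = 0x79
    unsigned char cc = (unsigned char)rem;   // still 0x79

    printf("sum=0x%X rem=0x%X cc=0x%X\n",
           (unsigned)sum, (unsigned)rem, (unsigned)cc);
    return 0;
}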
The only C implementations that would get a different result are explicitly non-compliant ones, e.g. Keil C51 in its default setting, where it does not apply the standard integer promotions and evaluates char expressions in 8 bits.
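For illustration, here is what purely 8-bit evaluation (no promotion to int) effectively computes for the OP's expression; this is a simulation in standard C of that assumed behavior, not output from C51 itself:

#include <stdio.h>

int main(void)
{
    unsigned char aa = 0xab, bb = 0xcd;

    // Simulate arithmetic done entirely in 8 bits: the sum wraps
    // modulo 256 before the remainder is taken.
    unsigned char sum8 = (unsigned char)(aa + bb);  // 0x178 wraps to 0x78
    unsigned char cc   = sum8 % 255;                // 0x78 % 255 = 0x78

    printf("sum8=0x%X cc=0x%X\n", (unsigned)sum8, (unsigned)cc);
    return 0;
}

That 8-bit wrap-then-remainder sequence reproduces exactly the cc = 0x78 the OP observed.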
Your logic is good. That being said, I still expect the Keil C++ compiler (at least the V5 compiler) to produce 0x78, not 0x79, for what the OP presented.
The expectation of getting 0x78 is in direct violation of all applicable standards, for both C and C++, in every standard revision of either. Integer promotions have been part of C since well before the first C standard.
I.e. the language does, in fact, guarantee the result in this case, even though it might be less than obvious to the casual reader. Getting something else can be a compiler bug, but that depends on whether the compiler in question was run in a mode that promises to respect the standard(s). If it made no such promise, there is no promise to break, so it isn't a bug.
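If code has to run under such a non-promoting mode anyway, one defensive idiom is to make the widening explicit, so the intended arithmetic survives either way. A sketch, assuming the compiler at least widens mixed-type operands to the larger operand's type:

#include <stdio.h>

int main(void)
{
    unsigned char aa = 0xab, bb = 0xcd, cc;

    // Casting one operand forces the addition (and the remainder)
    // to be computed in unsigned int, whether or not the compiler
    // performs the standard integer promotions on its own.
    cc = (unsigned char)(((unsigned int)aa + bb) % 255u);

    printf("cc=0x%X\n", (unsigned)cc);  // prints cc=0x79
    return 0;
}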