Hi,
I made a small program to make the problem clear:
==================
void problem(void)
{
    unsigned char ii;
    unsigned char jj;
    unsigned char flag;

    for (ii = 0; ii < 0xff; ii++) {
        for (jj = 0; jj < 0xff; jj++) {
            if (ii * jj < 0xff) {
                if (ii * (unsigned int)jj < 0xff)
                    flag = 1;   // ok
                else
                    flag = 2;   // not ok
            }
        }
    }
}
===================
I have a double for loop; both loop counters are of type unsigned char. When I compare with if (ii * jj < 0xff), the outcome is sometimes wrong (flag = 2). That can be checked against the same expression with an explicit (unsigned int) cast.
The first three combinations which yield an incorrect result are:
ii=0x82, jj=0xfd
ii=0x82, jj=0xfe
ii=0x83, jj=0xfb
I am using the following compiler: C51 COMPILER V9.51 - SN: K1NGC-DAEVIE Copyright (C) 2012 ARM Ltd and ARM Germany GmbH. All rights reserved.
on a C8051F587 processor.
Questions:
o Do you agree that this is a bug in the compiler?
o Is this a known bug?
If I recall it correctly:
The result of "ii * jj" is signed and therefore can be lower than 0xff
Write instead: if ((unsigned int)(ii * jj) < 0xffu)
So it is not a bug. The signed/unsigned rules in C have some well-known pitfalls.
Thanks a lot for your answer. Actually I should have known that. Probably not used to 2-byte integers anymore. (Luckily I did not write the code; I'm just trying to fix some bugs.)
The result of "ii * jj" is signed and therefore can be lower than 0xff
That's quite irrelevant, however, because the OP's code doesn't compute (ii * jj). It computes (ii * (unsigned int)jj). By the official rules of standard C, that should be equivalent to
((unsigned int)ii * (unsigned int)jj)
But because the '51 is a really small processor, the default behaviour of C51 is not to perform this kind of "usual arithmetic conversion". You have to turn that on explicitly by a compiler flag to get the standard behaviour.
the OP's code doesn't compute (ii * jj).
Ooops, sorry. I only saw afterward that he did indeed do that.
Ooops, sorry
There's a first!
But x * y, when computed with 8-bit operands into an 8-bit result, can obviously produce a result less than 255, while the same expression computed with 16-bit resolution is >= 255.
ii=0x82, jj=0xfd = 0x807a if 16-bit and 0x7a if 8-bit
ii=0x82, jj=0xfe = 0x80fc if 16-bit and 0xfc if 8-bit
ii=0x83, jj=0xfb = 0x8071 if 16-bit and 0x71 if 8-bit
All three cases have an 8-bit result < 0xff. All three cases have a 16-bit unsigned result >= 0xff.
Change the compiler flags to extend all operands to the size of int (i.e. 16 bits), and then both your comparisons will be made on the 16-bit result.
Just as noted - the 8051 compiler cheats and defaults to computing 8-bit expressions in 8-bit resolution, simply because the chip only has native 8-bit arithmetic operations. It provides a compiler flag to follow the standard behaviour of using at least int-sized resolution.