Hello,
We came across a mystery/bug yesterday. This was tested with C51 v9.53 (Simplicity Studio) and 9.56 (uVision).
unsigned char data c;
unsigned int  data d;

// The values 0x7B and 0xFF7B are wrong
c = (200 - 50) * (255 - 0) / (255 - 50);
    0000738d: MOV 65H, #7BH
d = (200 - 50) * (255 - 0) / (255 - 50);
    00007390: MOV 66H, #0FFH
    00007393: MOV 67H, #7BH

// These are correct
c = (200 - 50) * (255u - 0) / (255 - 50);
    0000738d: MOV 65H, #0BAH
d = (200 - 50) * (255u - 0) / (255 - 50);
    00007390: MOV 66H, #00H
    00007393: MOV 67H, #0BAH
The uVision docs say that numeric constants default to 16 bits. Is this not true? Or is this an issue with the "Constant Folding" optimizing step?
Any insights appreciated,
Darren
d = 38250 / 205;
In the above example, the constant 38250 was upgraded to unsigned int. My guess is unsigned int is 16 bits on your machine. The compiler decides the size of the calculations based on the individual constants being used. It is happy to upgrade to unsigned int, but will not go higher than that unless something is specifically a "higher" type, so no, it was not upgraded to a 32-bit value.
ADC0CF = (((SYSCLK/2)/3000000)-1)<<3;
Above, my guess is that SYSCLK is defined to be larger than unsigned int, so the compiler is also happy to upgrade the constant 3000000 to the size of SYSCLK. If SYSCLK were of type unsigned int, the compiler would probably have complained. (Try replacing SYSCLK with 38250 and see if it complains about 3000000 being too large / truncation happening.)
d = (200 - 50) * (255 - 0) / (255 - 50);
All of the constants fit very nicely in a 16-bit signed integer, so the compiler does not have a valid reason to "upgrade" to a 16-bit unsigned integer.
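To see the wrong values fall out, here is the 16-bit signed arithmetic worked through (my own sketch, simulated on a wider host with explicit casts; the out-of-range conversion to int16_t is implementation-defined in C, but wraps on every two's-complement machine I know of):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Simulate C51's 16-bit signed int arithmetic. */
    int16_t product  = (int16_t)(150L * 255);    /* 38250 does not fit in int16_t: wraps to -27286 */
    int16_t quotient = (int16_t)(product / 205); /* -27286 / 205 truncates toward zero: -133       */

    printf("product  = %d (0x%04X)\n", product,  (unsigned)(uint16_t)product);  /* -27286 (0x956A) */
    printf("quotient = %d (0x%04X)\n", quotient, (unsigned)(uint16_t)quotient); /* -133   (0xFF7B) */
    return 0;
}

The low byte 0x7B is what ends up in c, and the full 0xFF7B in d - matching the listing above.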
d = (200 - 50) * (255u - 0) / (255 - 50);
In this case you specify to the compiler a type "higher" than int (unsigned int), so all of them are converted to it.
d = (200 - 50) * (255u - 0) / (255 - 50 - (-1));
In this case, the compiler would be expected to upgrade all the constants to unsigned int, but the -1 causes a "problem", as it does not fit within the range of an unsigned int.
In the above example, the constant 38250 was upgraded to unsigned int.
No. Or rather: if it had been, that would have constituted a compiler bug. For a 16-bit target, the constant 38250 is of type (signed) long int, which is at least 32 bits wide but could be wider.
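So on C51 that division really is carried out in 32 bits. A sketch of the two cases (the second line is hypothetical, just to show what a 16-bit constant would have done):

d = 38250 / 205;       /* 38250 > 32767, so it is a long constant; */
                       /* 32-bit division yields 186               */
d = (int)38250 / 205;  /* hypothetical: forced into 16 bits, 38250 */
                       /* wraps to -27286 and the result is -133   */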
My guess is unsigned int is 16-bits on your machine.
No need for a guess: it's C51, so yes, int is 16 bits wide.
In this case you specify to the compiler a type "higher" than int (unsigned int), so all of them are converted to it.

No. Only one of the constants in that expression will be converted to unsigned: the 0 in (255u - 0). The other 4 stay as signed ints. Then the results of their subtractions get converted to unsigned. I.e. what actually happens is equivalent to:
((unsigned)(200 - 50) * (255u - (unsigned)0)) / (unsigned)(255 - 50)
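Worked through on 16-bit ints (my arithmetic, for illustration):

(200 - 50)           -> 150, signed int
(255u - (unsigned)0) -> 255u
150 * 255u           -> 150 is converted to unsigned: 38250u (0x956A), no overflow
(255 - 50)           -> 205, signed int
38250u / 205         -> 205 is converted to unsigned: 186 (0xBA), as in the listing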
but the -1 will cause a "problem" as it does not fit within the range of an unsigned int.
No, it won't, because that -1 is never promoted to unsigned.
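Worked through the same way (again my arithmetic): the subtraction happens entirely in signed int, and only its result is converted for the division:

(255 - 50 - (-1)) -> 206, all signed int arithmetic; the -1 itself is never converted
38250u / 206      -> 206 is converted to unsigned: 185 (0xB9)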
For OP -
You were correct that 3000000 would be considered a long int (a 32-bit signed value). My guess about SYSCLK makes no difference to this; any comments based on SYSCLK "helping" to upconvert the 3000000 should be ignored. Hans-Bernhard Broeker's specific example of when the conversions to unsigned int happen is correct.
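So the ADC0CF line stands on its own. A sketch, assuming SYSCLK is defined as 24500000 (that value is my assumption, not something from this thread):

#define SYSCLK 24500000UL                /* assumed value */

/* 3000000 > 65535, so it cannot be an int or unsigned int constant
   on C51; it is a (signed) long constant regardless of SYSCLK.     */
ADC0CF = (((SYSCLK / 2) / 3000000) - 1) << 3;
/* ((12250000 / 3000000) - 1) << 3  =  (4 - 1) << 3  =  24 */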
// working - all 16-bit calculations: signed on 2 of the inner
// calculations, unsigned on the other inner calculation, and
// unsigned on all 3 of the outermost calculations
((unsigned)(200 - 50) * (255u - (unsigned)0)) / (unsigned)(255 - 50)

// original - all 16-bit signed calculations
(unsigned)((200 - 50) * (255 - 0) / (255 - 50))
While a signed constant such as -1 will itself always be signed, calculations that use it can still apply conversions to its value.
unsigned x = 255u * -1;              // 65281 - but you already know this can happen
unsigned x = 255u * ((unsigned)-1);
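(Both lines produce 65281 on a 16-bit int target; the explicit cast in the second just makes visible the conversion that was already happening.)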