Hello,
We came across a mystery/bug yesterday. This was tested with C51 v9.53 (Simplicity Studio) and 9.56 (uVision).
unsigned char data c;
unsigned int data d;

// The values 0x7B and 0xFF7B are wrong
c = (200 - 50) * (255 - 0) / (255 - 50);
    0000738d: MOV 65H, #7BH
d = (200 - 50) * (255 - 0) / (255 - 50);
    00007390: MOV 66H, #0FFH
    00007393: MOV 67H, #7BH

// These are correct
c = (200 - 50) * (255u - 0) / (255 - 50);
    0000738d: MOV 65H, #0BAH
d = (200 - 50) * (255u - 0) / (255 - 50);
    00007390: MOV 66H, #00H
    00007393: MOV 67H, #0BAH
The uVision docs say that numeric constants default to 16 bits (i.e. int). Is that not true here? Or is this an issue with the "Constant Folding" optimization step?
Any insights appreciated, Darren
For OP -
You were correct that 3000000 would be considered a long int (a 32-bit signed value). My guess about SYSCLK makes no difference to this; any comments claiming that SYSCLK "helps" to upconvert the 3000000 should be ignored. Hans-Bernhard Broeker's specific example of when the conversions to unsigned int happen is correct.
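To make the rule concrete, here is a minimal sketch (the harness and comments are mine, not from the thread). Under C90's rules an unsuffixed decimal constant takes the first of int, long int, unsigned long int that can represent it, so on C51, where INT_MAX is 32767, 3000000 is long all by itself:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* On C51: sizeof 30000 == 2 (fits a 16-bit int), while
       sizeof 3000000 == 4 (exceeds INT_MAX, so it is typed long).
       A 32-bit-int host prints 4 for both, which is exactly the
       portability trap being discussed here. */
    printf("INT_MAX        = %d\n", INT_MAX);
    printf("sizeof 30000   = %u\n", (unsigned)sizeof 30000);
    printf("sizeof 3000000 = %u\n", (unsigned)sizeof 3000000);
    return 0;
}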
// working - all 16-bit calculations: signed on two of the inner
// calculations, unsigned on the other inner calculation, and unsigned
// on all three of the outermost calculations
((unsigned)(200 - 50) * (255u - (unsigned)0)) / (unsigned)(255 - 50)

// original - all 16-bit signed calculations, with the result then cast
(unsigned)((200 - 50) * (255 - 0) / (255 - 50))
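For concreteness, here is the folded arithmetic reproduced as a hedged sketch on a host compiler, with int16_t/uint16_t standing in for C51's 16-bit int/unsigned int (the harness is mine; the casts emulate the 16-bit wrap that C51 bakes in at compile time):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Inner product: 150 * 255 = 38250, which does not fit in a
       16-bit signed int (INT16_MAX is 32767) and wraps to -27286. */
    int16_t product = (int16_t)(150 * 255);

    /* Signed division: -27286 / 205 truncates toward zero to -133,
       which is 0xFF7B in 16-bit two's complement - the "wrong" value. */
    int16_t s = (int16_t)(product / 205);

    /* Unsigned folding: 38250 fits in a uint16_t, and
       38250 / 205 = 186 = 0x00BA - the "correct" value. */
    uint16_t u = (uint16_t)(38250u / 205u);

    printf("signed:   %d (0x%04X)\n", s, (unsigned)(uint16_t)s);
    printf("unsigned: %u (0x%04X)\n", (unsigned)u, (unsigned)u);
    return 0;
}

So there is no constant-folding bug: the compiler folds exactly the arithmetic that the C type rules prescribe for a 16-bit int target.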
While a signed constant such as -1 is always signed in itself, the usual arithmetic conversions can still convert it when it is used in a calculation.
unsigned x = 255u * -1;             // 65281 - but you already know this can happen
unsigned y = 255u * ((unsigned)-1); // same 65281, with the conversion made explicit
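One caveat worth adding (my own note, not from the thread): that 65281 presumes C51's 16-bit unsigned int. The same source compiled where unsigned int is 32 bits gives a different number, so off-target tests of these expressions need a width-pinned type:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Emulating C51's 16-bit unsigned int: -1 converts to 65535,
       and (255 * 65535) mod 65536 = 65281. */
    uint16_t x16 = (uint16_t)(255u * (uint16_t)-1);

    /* With the host's native int width, the same expression yields
       4294967041 where unsigned int is 32 bits. */
    unsigned x_native = 255u * (unsigned)-1;

    printf("16-bit: %u, native: %u\n", (unsigned)x16, x_native);
    return 0;
}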