Hello,
We came across a mystery/bug yesterday. This was tested with C51 v9.53 (Simplicity Studio) and v9.56 (uVision).
unsigned char data c;
unsigned int  data d;

// The values 0x7B and 0xFF7B are wrong
c = (200 - 50) * (255 - 0) / (255 - 50);
    0000738d:  MOV 65H, #7BH
d = (200 - 50) * (255 - 0) / (255 - 50);
    00007390:  MOV 66H, #0FFH
    00007393:  MOV 67H, #7BH

// These are correct
c = (200 - 50) * (255u - 0) / (255 - 50);
    0000738d:  MOV 65H, #0BAH
d = (200 - 50) * (255u - 0) / (255 - 50);
    00007390:  MOV 66H, #00H
    00007393:  MOV 67H, #0BAH
The uVision documentation says that numeric constants default to 16 bits. Is that not true? Or is this an issue with the "Constant Folding" optimization step?
Any insights appreciated,
Darren
I suppose I was expecting that, at compile time, it would catch the signed overflow and promote the evaluation to 32 bits.
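For reference, a sketch of what I mean (not verified against C51, just standard C promotion rules): suffixing any one operand with L should force the whole constant expression to be folded in 32-bit long arithmetic, the same way the u suffix forces it unsigned.

unsigned char data c;
unsigned int  data d;

// Assumption: an L suffix on one operand promotes the whole constant
// expression to 32-bit long, so the intermediate 150 * 255 = 38250 no
// longer overflows and the folded results should be 0xBA / 0x00BA.
c = (200 - 50) * (255L - 0) / (255 - 50);
d = (200 - 50) * (255L - 0) / (255 - 50);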
For the '51, an int is 16 bits; a 32-bit int would be ridiculous for the '51. The constant expression is folded in signed 16-bit arithmetic, and 150 * 255 = 38250 overflows a signed 16-bit int, so the intermediate wraps negative before the division.
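A minimal sketch of the arithmetic, using int16_t/uint16_t on a desktop compiler to stand in for the '51's 16-bit int (the narrowing conversion is implementation-defined, but on a two's-complement host it reproduces the same values as the listings above):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Signed folding: 150 * 255 = 38250 does not fit in a signed
       16-bit int (max 32767) and wraps to -27286. */
    int16_t product = (int16_t)((200 - 50) * (255 - 0));
    int16_t wrong   = product / (255 - 50);      /* -27286 / 205 = -133 */

    /* Unsigned folding: the u suffix keeps the intermediate unsigned,
       so 38250 survives and 38250 / 205 = 186. */
    uint16_t uproduct = (uint16_t)((200 - 50) * (255u - 0));
    uint16_t right    = uproduct / (255 - 50);

    printf("signed fold:   c = 0x%02X, d = 0x%04X\n",
           (unsigned)(uint8_t)wrong, (unsigned)(uint16_t)wrong);   /* 0x7B, 0xFF7B */
    printf("unsigned fold: c = 0x%02X, d = 0x%04X\n",
           (unsigned)(uint8_t)right, (unsigned)right);             /* 0xBA, 0x00BA */
    return 0;
}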