Hi, I have this code:
void main(void)
{
    data unsigned char a = 0;
    data unsigned char b = 0;
    data unsigned int  c = 1000;

    b = c;
    while (1) {
        a += 1;
    }
}
And it compiles with no errors or warnings. Why is this? Why does it not raise an error when the 16-bit value c is stored in the 8-bit variable b? The INTPROMOTE directive is turned off, so that is not it.
Yours, confused.
Robbie Martin.
Why should it?
If this were considered an error, how could a compiler let you use the normal fgetc() etc. to read characters from a file and assign the result to a char? fgetc() et al. return an int, not a char.
The compiler could potentially note that the size of the destination is too small, and issue a warning about the loss of significant bits of data. However, the standard does say in 5.1.2.3, point 10:
"EXAMPLE 2: In executing the fragment char ch1, ch2; /* ... */ c1 = c1 + c2; the 'integer promotions' require that the abstract machine promote the value of each variable to int size and then add the two ints and truncate the sum. [...]"
How much fun would it be to work with characters if the compiler, on the one hand, is required to convert characters to int, and on the other hand always required a typecast before the assignment (just to acknowledge that it is performing the truncation the standard requires) in order to suppress a warning?
Having a compiler that may deviate from the standard and not promote characters to int is a trade-off to reduce code size on the tiny 8051 processor that C51 targets. But the trade-off has to be as compatible as possible, since you want the code to behave the same as long as no overflow occurs. Requiring a typecast when using the C51 compiler, but no typecast when using a standards-compliant compiler, would be quite strange, don't you think?
Because there is no requirement for it to do so!
See: http://www.keil.com/forum/docs/thread11903.asp If you want a strongly-typed "nanny" of a language that requires the compiler to hold your hand at all times, and keep all sharp objects out of your reach - then 'C' is not the one for you!
The C language allows automatic conversion between different integer widths for programmer convenience. If the language didn't convert, you'd have to cast in every expression that mixes different sorts of integers. As you note, this convenience is occasionally a programmer inconvenience as well.
PC-Lint, from Gimpel Software, will issue warnings for potential loss of precision in cases like this. Well worth the negligible price.
Thank you for the comments that have appeared. I had to ask this question on behalf of a team that I support (really), and the answers have given me a better insight into the C standard. The question was raised because we have a static code analyser which did raise a warning, whereas the compiler didn't.
I will pass these responses back to the engineer who raised the original query.
Robbie.
"we have a static code analyser which did raise a warning, whereas the compiler didn't"
Yes - that is precisely the reason why you have a static code analyser!
This is why Keil pushes PC-Lint on their website:
http://www.keil.com/pclint/
Keil has pre-made PC-Lint configuration files for all of their compilers.
This 'what if' case is a perfect role for code analysis software, not a compiler.