Why does Keil's C51 preprocessor work fine with:
#define MULTPLR (unsigned char)((float)9.114584e-5 * (float)18432000 / (float)12)
...but consistently give the wrong result with:
#define MULTPLR (unsigned char)((float)9.114584e-5 * (float)18432000 / (float)6)
???
The preprocessor doesn't compute the numbers - it just performs any copy/paste replacements and leaves it to the compiler.
But maybe you should tell us what happens, and what you expected to happen?
Anyway - the first expression gives the value 140.00001024, which is truncated to 140. This fits into an 8-bit unsigned integer.
The second expression gives the value 280.00002048, which is too large for your unsigned char. The allowed range is only 0 to 255. 280 decimal is 0x0118. If you only keep the least significant 8 bits, you get 0x18, or 24 in decimal.
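For anyone who wants to verify this on a desktop compiler, here is a minimal sketch in standard C (not C51-specific; the macro names MULTPLR_12 and MULTPLR_6 are made up for the demonstration). Keeping only the low byte is what C51 does here; strictly speaking, standard C leaves an out-of-range float-to-unsigned-char conversion undefined, so other compilers are not obliged to behave the same way.

#include <stdio.h>

/* Same expressions as in the question, with hypothetical names for the two divisors. */
#define MULTPLR_12 (unsigned char)((float)9.114584e-5 * (float)18432000 / (float)12)
#define MULTPLR_6  (unsigned char)((float)9.114584e-5 * (float)18432000 / (float)6)

int main(void)
{
    /* ~140.00001 fits in an unsigned char and truncates to 140. */
    printf("divide by 12: %f -> %u\n", 9.114584e-5 * 18432000 / 12, (unsigned)MULTPLR_12);
    /* ~280.00002 does not fit; 280 is 0x118, and keeping the low byte 0x18 gives 24. */
    printf("divide by 6:  %f -> %u\n", 9.114584e-5 * 18432000 / 6, (unsigned)MULTPLR_6);
    return 0;
}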
Always use pen and paper, a pocket calculator, or similar, and test things logically step by step. There is almost always a logical explanation.
Oops!
Thanks Per. Yes, the compiler was correctly unable to squeeze the value of 280 decimal into the unsigned char. The problem is with me!
Thank you very much for your invaluable, detached perspective. It's a silly error that I should have spotted immediately, but I needed someone else to uncover it for me.
Think I'll give it a rest for a bit today...
You may find it more convenient to write 9.114584e-5f * 18432000.0f / 12.0f instead of (float)9.114584e-5 * (float)18432000 / (float)12
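For example, the whole macro could then be written without any casts (a sketch using the same MULTPLR name as in the question; the f suffix makes each literal a float from the start):

/* Equivalent macro using float-suffixed constants instead of casts. */
#define MULTPLR (unsigned char)(9.114584e-5f * 18432000.0f / 12.0f)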
The final question is whether the compiler should compute with float or use double precision before converting the result for the unsigned char assignment. The danger with floating point is that the limited numeric precision may result in funny things.
My understanding is that the compiler is not *required* to perform the calculation at greater precision than float, but if it does decide to use greater precision, it is not restricted to double or long double either.
Yes, the compiler has some options.
In this case, the compiler can only generate code with float precision, since the C RTL only supports single precision. But just because the compiler can't generate code that uses double precision doesn't mean it is forbidden from using additional precision when computing compile-time constants.
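On toolchains that implement C99's <float.h>, the FLT_EVAL_METHOD macro reports which format float arithmetic is actually evaluated in; this is only a host-side sketch, and C51 itself may not provide the macro:

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* Per C99: 0 = evaluate in the operand types, 1 = promote float to double,
       2 = promote to long double, -1 = indeterminable. */
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
    return 0;
}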
Typecasting every operand to float can also affect the precision the expression is computed with: the casts narrow each term to single precision before the arithmetic, whereas just adding a decimal point (e.g. 18432000.0, with no f suffix) makes the constant a double, so the expression may be evaluated in double precision.
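To see that concretely, here is a small host-side sketch in standard C that evaluates the same expression once with every operand cast to float and once with plain double constants. The exact digits depend on the compiler and target; the point is only that the low-order digits can differ.

#include <stdio.h>

int main(void)
{
    /* Every operand narrowed to single precision before the arithmetic. */
    float  as_float  = (float)9.114584e-5 * (float)18432000 / (float)12;
    /* Plain decimal-point constants are doubles, so the arithmetic is done in double. */
    double as_double = 9.114584e-5 * 18432000.0 / 12.0;

    printf("float  evaluation: %.9g\n", (double)as_float);
    printf("double evaluation: %.12g\n", as_double);
    return 0;
}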