
Preprocessor Confusion?

Why does Keil's C51 preprocessor work fine with:

#define MULTPLR (unsigned char)((float)9.114584e-5 * (float)18432000 / (float)12)

...but consistently give the wrong result with:

#define MULTPLR (unsigned char)((float)9.114584e-5 * (float)18432000 / (float)6)

???
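
For what it's worth, here is a minimal host-side sketch (standard C on a desktop compiler, not C51 itself; the names MULTPLR_12 and MULTPLR_6 are invented so both variants can be compared side by side). Note that 9.114584e-5 * 18432000 is roughly 1680.0, so the first variant comes to about 140 while the second comes to about 280, which no longer fits in an unsigned char:

#include <stdio.h>
#include <limits.h>

/* The two variants from the question, renamed so both can coexist. */
#define MULTPLR_12 (unsigned char)((float)9.114584e-5 * (float)18432000 / (float)12)
#define MULTPLR_6  (unsigned char)((float)9.114584e-5 * (float)18432000 / (float)6)

int main(void)
{
    /* The floating-point values before the unsigned char cast. */
    printf("/12 before cast: %f\n", 9.114584e-5 * 18432000.0 / 12.0);  /* ~140.0 */
    printf("/6  before cast: %f\n", 9.114584e-5 * 18432000.0 / 6.0);   /* ~280.0 */

    /* 140 fits in an unsigned char, but 280 exceeds UCHAR_MAX (255).
       Strictly speaking, converting an out-of-range floating value to an
       unsigned integer type is undefined behavior; many compilers happen
       to produce 280 mod 256 = 24, i.e. a "wrong" result. */
    printf("UCHAR_MAX   : %d\n", UCHAR_MAX);
    printf("MULTPLR /12 : %u\n", (unsigned)MULTPLR_12);
    printf("MULTPLR /6  : %u\n", (unsigned)MULTPLR_6);
    return 0;
}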

Parents
  • The final question is whether the compiler should compute with float or with double precision before converting the result for the unsigned char assignment.

    My understanding is that the compiler is not *required* to perform the calculation at greater precision than float, but neither is it prohibited from using double or long double if it decides to compute at greater precision than float (see the sketch below).
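
    For what it's worth, hosted C99 compilers advertise exactly this latitude through the FLT_EVAL_METHOD macro in <float.h>. C51 predates C99 and does not provide this, so the following is purely a desktop-compiler sketch:

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /*  0 : evaluate operations in the range and precision of their type
            1 : evaluate float and double operations as double
            2 : evaluate all operations as long double
           -1 : indeterminable */
        printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
        return 0;
    }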

Children
  • Yes, the compiler has some options.

    In this case, the compiler can only generate run-time code with float precision, since the C RTL only supports single precision. But the fact that the compiler can't generate code that uses double precision doesn't forbid it from using additional precision when folding compile-time constants.

    Typecasting every operand to float can lower the precision the expression is computed with, compared to just adding a decimal point at the end of the numbers: plain floating literals have type double, so they invite double-precision folding (see the sketch below).
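
    A minimal sketch of that difference on a hosted compiler (standard C; the macro names AS_FLOAT and AS_DOUBLE are invented for the comparison):

    #include <stdio.h>

    /* Every operand forced to float: the whole expression is
       evaluated in single precision. */
    #define AS_FLOAT  ((float)9.114584e-5 * (float)18432000 / (float)6)

    /* Plain floating literals have type double, so a conforming
       compiler folds this in at least double precision. */
    #define AS_DOUBLE (9.114584e-5 * 18432000. / 6.)

    int main(void)
    {
        printf("all-float casts : %.9f\n", AS_FLOAT);
        printf("double literals : %.9f\n", AS_DOUBLE);
        return 0;
    }

    The low-order digits of the two printouts differ slightly, and that kind of discrepancy is exactly what matters once the result is truncated by a cast.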