Hello Community! First time poster here :)
I am debugging some code and am completely perplexed by a comparison between an integer and a float that sometimes evaluates true.
In the following code, a floating-point calculation is performed and typecast to an unsigned integer, which is then compared to another float. I expect that in all but the zero case, this evaluation would fail. And usually it does. But sometimes it evaluates true, and I am trying to understand what conditions lead to this.
float gA = 0.00356270161773046;
float gB = 0.00336178532868241;
float sA = 0.5 / gA;
float sB = 0.5 / gB;
const float PA = sA * gA;
if(sB == (uint16_t)(PA / gB)) // Evaluates true.
In the above code, gA and gB are similar but different values, set elsewhere in the snipped portion, and by varying them slightly I can change the result of the evaluation. For example:
if gB = 0.00336178532868241 -> TRUE
if gB = 0.00332519446796205 -> FALSE
But I don't understand why it is ever evaluating true!
This code was written by someone else - who is much more experienced than me - and I have little idea what they are trying to achieve through this typecast to uint16_t. Can anyone recognise what is being done here? Does this cast operation have the same (converting/rounding) function as it does in gcc etc, or is it a true typecast?
This code is compiled via C51 for a C8051 microcontroller application, in case that is relevant. I cannot debug or rebuild the code.
Alexandicity said: Does this cast operation have the same (converting/rounding) function as it does in gcc etc, or is it a true typecast?
That question really makes no sense, because it is based on the incorrect assumption that what GCC does is somehow not a "true typecast". This incorrect assumption most likely stems from a mistaken idea of what a "true typecast" actually is.
Also note that this line:
Alexandicity said: float gA = 0.00356270161773046
almost certainly does not do what you think it does. A C51 'float' variable has nowhere near as many significant digits as you try to cram into it here.
Perhaps my terminology about typecasting is off. In my understanding, most casts are a no-op re-interpretation of the bits in a memory location (the "true" typecast as I call it). In GCC, casts between integer and floats are special, in that these actually do a conversion (in this case, dropping the decimal digits, effectively rounding towards zero). I am unsure what the C51 does in this situation (I presume the same as GCC).
For the long digits - sure, I know it's too many to be represented, but I didn't want to make unnecessary edits to the code when presenting it here. In any case, the first digits to change are in the third significant figure, which certainly is within the resolution of a float.
Remember that float values are (often) non-exact, so you should never (?) just use the '==' operator - always test for a range of values ...
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Floating-point equality: It’s worse than you think
Absolutely - I think this is an error in the code. In my opinion, this evaluation should (almost?) always fail because of this - I'm perplexed by the occasional positive evaluations, and I'm trying to understand under what conditions this can happen (beyond the trivial zero case). This doesn't happen on gcc (which I'm more used to), and I'm trying to see if there's a difference in the way that C51 handles this.
Alexandicity said: This code was written by someone else
Are you able to contact her/him?
Or their contemporaries / successors?
Alexandicity said: I cannot debug or rebuild the code
Alexandicity said: Perhaps my terminology about typecasting is off.
Terminology doesn't actually cover it. Just as I guessed before, it's clear that your underlying understanding of what a typecast actually is remains wrong.
And no, you won't gather such understanding from looking at what particular compilers do in particular cases, regardless of whether they're GCC, C51 or something else. The meaning of such things is not defined by implementors, but rather by the definition of the C programming language itself; consult the C standard.
As to this compare sometimes yielding true: on what basis did you conclude that should never happen? What exactly would prohibit "sB" from having an exact integer value which could compare equal to an uint16_t?