Hello Community! First time poster here :)
I am debugging some code and am completely perplexed by a comparison between an integer and a float that sometimes evaluates true.
In the following code, a floating-point calculation is performed, typecast to an unsigned integer, and then compared to another float. I would expect this comparison to fail in all but the zero case. Usually it does, but sometimes it evaluates true, and I am trying to understand what conditions lead to this.
float gA = 0.00356270161773046;
float gB = 0.00336178532868241;
float sA = 0.5 / gA;
float sB = 0.5 / gB;
const float PA = sA * gA;
if(sB == (uint16_t)(PA / gB)) // Evaluates true.
In the above code, gA and gB are similar but different values, set elsewhere in the portion I snipped; by varying them slightly I can change the result of the evaluation. For example:
if gB = 0.00336178532868241 -> TRUE
if gB = 0.00332519446796205 -> FALSE
But I don't understand why it is ever evaluating true..!
This code was written by someone else - who is much more experienced than me - and I have little idea what they are trying to achieve through this typecast to uint16_t. Can anyone recognise what is being done here? Does this cast operation have the same (converting/rounding) function as it does in gcc etc, or is it a true typecast?
This code is compiled via C51 for a C8051 microcontroller application, in case that is relevant. I cannot debug or rebuild the code.
Alexandicity said: Does this cast operation have the same (converting/rounding) function as it does in gcc etc, or is it a true typecast?
That question really makes no sense, because it is based on the incorrect assumption that what GCC does is somehow not a "true typecast". This incorrect assumption most likely stems from an incorrect idea of what a "true typecast" actually is.
Also note that this line:
Alexandicity said: float gA = 0.00356270161773046
almost certainly does not do what you think it does. A C51 'float' variable has nowhere near as many significant digits as you try to cram into it here.
Perhaps my terminology about typecasting is off. In my understanding, most casts are a no-op re-interpretation of the bits in a memory location (what I called the "true" typecast). In GCC, casts between integers and floats are special, in that they actually perform a conversion (in this case, dropping the fractional digits, effectively rounding towards zero). I am unsure what C51 does in this situation (I presume the same as GCC).
For the long digits - sure, I know it's too many to be represented, but I didn't want to make unnecessary edits to the code when presenting it here. In any case, the first digits to change are in the third significant figure, which certainly is within the resolution of a float.
Alexandicity said: Perhaps my terminology about typecasting is off.
Terminology doesn't actually cover it. Just as I guessed before, it's clear that your entire understanding of what a typecast actually is is wrong.
And no, you won't gather such understanding from looking what particular compilers do in particular cases, regardless of whether they're GCC, C51 or something else. The meaning of such things is not defined by implementors, but rather by the definition of the C programming language. Which see.
As to this compare sometimes yielding true: on what basis did you conclude that should never happen? What exactly would prohibit "sB" from having an exact integer value which could compare equal to an uint16_t?
I understand what a typecast is fine, and I know how they are done in C. There are two distinct kinds of action that people commonly call "typecasting" - I assume you have no objection to this statement? I do not know the "official" names for the two actions, and I do not particularly care. If you have spotted an error in my description of what is happening, I did not see it in your response.
I ask about the specifics of the C51 compiler because, while I expect it to implement the typecast the same way gcc does, I cannot be sure, and I wanted confirmation from people more familiar with it than I am. A difference in compiler behaviour would explain what I am seeing, and while that is pretty unlikely, it is also the only idea I have so far...
You are right, of course, that if sB did, by chance, hold an exact integer value, then it could evaluate as equal to the (implicitly promoted) uint16_t value. But that is not what is happening here; the example has two very-much-not-integer values, and they still evaluate as equal.
Alexandicity said: I understand what a typecast is fine, and I know how they are done in C.
Everything you've written here so far tells me quite clearly that this claim of yours is wrong.
Alexandicity said: There are two distinct kinds of action that people commonly call "typecasting" - I assume you have no objection to this statement?
Only that it's flat-out wrong.
Alexandicity said: If you have spotted an error in my description of what is happening, I did not see it in your response.
You did not see this:
Broeker said: it's clear that your entire understanding of what a typecast actually is is wrong.