Hello Community! First time poster here :)
I am debugging some code and am completely perplexed by a comparison between an integer and a float that sometimes evaluates true.
In the following code, a floating-point calculation is performed and typecast to an unsigned integer, which is then compared to another float. I would expect this comparison to fail in all but the zero case, and usually it does. But sometimes it evaluates true, and I am trying to understand what conditions lead to this.
float gA = 0.00356270161773046;
float gB = 0.00336178532868241;
[...]
float sA = 0.5 / gA;
float sB = 0.5 / gB;
const float PA = sA * gA;
if(sB == (uint16_t)(PA / gB)) // Evaluates true.
In the above code, gA and gB are similar but different values, set elsewhere in the snipped portion, and by varying them slightly I can change the result of the evaluation. For example:
if gB = 0.00336178532868241 -> TRUE
if gB = 0.00332519446796205 -> FALSE
But I don't understand why it ever evaluates true!
This code was written by someone else - who is much more experienced than me - and I have little idea what they are trying to achieve with this typecast to uint16_t. Can anyone recognise what is being done here? Does this cast perform the same value conversion (rounding/truncating the float to an integer) as it does in gcc etc., or is it reinterpreting the raw bits?
This code is compiled via C51 for a C8051 microcontroller application, in case that is relevant. I cannot debug or rebuild the code.
Thank you!
Hi Andy,
Unfortunately, it was written by someone who has since left the company. I suppose I could reach them if necessary, but let's say I'm trying to understand it myself :)
Similarly, I imagine I could get hold of the compiler, rebuild the code, and reproduce this on a dev board. That seems like a lot of steps, but perhaps it's what I need to do.