
Float - uint comparison sometimes succeeding?

Hello Community! First time poster here :)

I am debugging some code and am completely perplexed by a comparison between an integer and a float that sometimes evaluates as true.

In the following code, a floating-point calculation is performed and the result is cast to an unsigned integer, which is then compared to another float. I expect that, in all but the zero case, this comparison would fail. Usually it does, but sometimes it evaluates as true, and I am trying to understand what conditions lead to this.

float gA = 0.00356270161773046;
float gB = 0.00336178532868241;

[...]

float sA = 0.5 / gA;
float sB = 0.5 / gB;

[...]

const float PA = sA * gA;

if(sB == (uint16_t)(PA / gB))
    // Evaluates true.
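
For reference, here is a rough sketch of what I would expect under plain IEEE-754 single-precision arithmetic (written for a desktop compiler such as gcc rather than the actual C51 build; the printf() harness is just there so the intermediate values can be inspected):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    // Same constants as above, kept in single precision throughout.
    float gA = 0.00356270161773046f;
    float gB = 0.00336178532868241f;

    float sA = 0.5f / gA;                  // roughly 140.34
    float sB = 0.5f / gB;                  // roughly 148.73

    const float PA = sA * gA;              // roughly 0.5, give or take rounding

    float ratio = PA / gB;                 // roughly 148.73 again
    uint16_t truncated = (uint16_t)ratio;  // fractional part dropped -> 148

    printf("sA=%f sB=%f PA=%f ratio=%f truncated=%u\n",
           sA, sB, PA, ratio, (unsigned)truncated);

    // The uint16_t result is converted back to float for the comparison,
    // so I would expect 148.0f == 148.73f to be false here.
    printf("compare: %d\n", sB == (uint16_t)(PA / gB));
    return 0;
}

If I have understood the conversion rules correctly, the cast simply drops the fractional part, so the comparison should only succeed when 0.5 / gB happens to land on an exact integer.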

In the above code, gA and gB are similar but distinct values, set elsewhere in the snipped portion, and by varying them slightly I can change the result of the comparison. For example:

if gB = 0.00336178532868241 -> TRUE

if gB = 0.00332519446796205 -> FALSE
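
Working those two cases through by hand, assuming an ordinary truncating conversion:

0.5 / 0.00336178532868241 ≈ 148.73, which would truncate to 148

0.5 / 0.00332519446796205 ≈ 150.37, which would truncate to 150

so in both cases I would expect the truncated value to differ from sB and the comparison to come out FALSE.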

But I don't understand why it ever evaluates as true!

This code was written by someone else - who is much more experienced than I am - and I have little idea what they are trying to achieve with this cast to uint16_t. Can anyone recognise what is being done here? Does the cast have the same converting/rounding behaviour as it does in gcc etc., or is it a raw reinterpretation of the bits?
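
My (possibly incomplete) understanding of the gcc-style behaviour I want to compare against is a plain value conversion that truncates toward zero, roughly as in this little test:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    // In standard C, a float -> unsigned integer conversion discards the
    // fractional part (truncation toward zero); it does not round.
    printf("%u\n", (unsigned)(uint16_t)148.73f);   // prints 148
    printf("%u\n", (unsigned)(uint16_t)148.99f);   // prints 148
    return 0;
}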

This code is compiled via C51 for a C8051 microcontroller application, in case that is relevant. I cannot debug or rebuild the code.

Thank you!