
Float or long - which is faster?

My manual lists multiplication as taking 106 cycles for a long, but for
a float it can take anywhere from 13 to 198 cycles (with an average of 114).

It looks like the two are about equal based on the average, but what
determines the time to do a float calculation?

What guidelines should I follow in trying to choose between the two types?

Thanks.

Andy

Reply
  • It looks like the two are about equal based on the average, but what determines the time to do a float calculation?

    This is a very loaded question; to explain it fully would require a solid understanding of the IEEE floating-point format. There are many web sites on the topic, and a college with a good math or CS department would probably be a good place to start.

    The quick explanation is that a long is easy: almost all processors can handle integer math natively, so the time is 100% predictable.

    A float is usually not handled natively, so the bits of the float representation have to be manipulated in software before the processor can do any native math on it. How much of that work is needed depends on the operand values, which is why the cycle count varies so much.
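
    For a feel of what "manipulate bits" means, here is a rough sketch (assuming 32-bit IEEE-754 single precision, and ignoring zeros, denormals and infinities) of the unpacking a soft-float routine has to do before the integer math can even start:

        #include <stdint.h>
        #include <string.h>

        /* Sketch only: split a 32-bit IEEE-754 float into sign, exponent and
           mantissa so the integer ALU can work on the pieces.  A real soft-float
           library also has to normalise and round the result afterwards. */
        void unpack_float(float f, uint32_t *sign, int32_t *exponent, uint32_t *mantissa)
        {
            uint32_t bits;
            memcpy(&bits, &f, sizeof bits);                      /* reinterpret the bit pattern  */

            *sign     = bits >> 31;                              /* 1 sign bit                   */
            *exponent = (int32_t)((bits >> 23) & 0xFF) - 127;    /* 8-bit biased exponent        */
            *mantissa = (bits & 0x007FFFFFUL) | 0x00800000UL;    /* 23-bit fraction + implicit 1 */
        }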

  • What guidelines should I follow in trying to choose between the two types?

    It depends on your application. Are your numbers purely integers? Are you willing to float your own point (see the fixed-point sketch below)? Would you rather have 100% predictable timing, or can you take the hit of unpredictable timing? Will your largest number fit in a long? The list continues.
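
    If you do decide to "float your own point", the usual approach is scaled integers (fixed point). A minimal sketch, assuming a 32-bit long, an arbitrary choice of 8 fraction bits, and a 64-bit intermediate for the multiply:

        #include <stdint.h>

        #define FRAC_BITS 8                      /* arbitrary choice of scale: 2^8 */

        typedef int32_t fixed_t;                 /* value * 256, stored in a long  */

        #define TO_FIXED(x)  ((fixed_t)((x) * (1L << FRAC_BITS)))

        /* Fixed-point multiply: stays entirely in the integer unit,
           so it takes the same number of cycles every time. */
        static fixed_t fixed_mul(fixed_t a, fixed_t b)
        {
            return (fixed_t)(((int64_t)a * (int64_t)b) >> FRAC_BITS);
        }

    For example, TO_FIXED(1.5) is 384, and fixed_mul(TO_FIXED(1.5), TO_FIXED(2.0)) comes back as 768, i.e. TO_FIXED(3.0).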

    Also, keep in mind that many math library functions are inherently floating point, so even if you use longs to store your data, something like sqrt() will actually perform floating-point math and then cast the result back to a long.
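
    If you only need an integer result, one way to avoid that hit is an all-integer square root; here is a sketch of the classic bit-by-bit method (not taken from any particular library):

        #include <stdint.h>

        /* Returns floor(sqrt(x)) using only shifts, adds and compares,
           so the floating-point library never gets pulled in. */
        uint32_t isqrt32(uint32_t x)
        {
            uint32_t result = 0;
            uint32_t bit = 1UL << 30;            /* highest power of 4 that fits in 32 bits */

            while (bit > x)
                bit >>= 2;

            while (bit != 0) {
                if (x >= result + bit) {
                    x -= result + bit;
                    result = (result >> 1) + bit;
                } else {
                    result >>= 1;
                }
                bit >>= 2;
            }
            return result;
        }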
