
Incorrect code generation when converting float to short/int16

Hi,
the following code for STM32F4 (Cortex-M4):

float fZero = 0.f;
float fInfinity;
short sTestPlus, sTestNeg;

int main(void)
{
  fInfinity = 1.f / fZero;    /* +Inf */
  sTestPlus = fInfinity;      /* float -> short, out of range */
  fInfinity = -1.f / fZero;   /* -Inf */
  sTestNeg  = fInfinity;      /* float -> short, out of range */
  while (1);
}

should result in sTestPlus = 0x7FFF and sTestNeg = 0x8000 (the saturated 16-bit values). Instead it produces 0xFFFF and 0x0000.

The reason is that the compiler uses the signed 32-bit conversion (which gives the correctly saturated results 0x7FFFFFFF and 0x80000000) and then simply keeps the low 16 bits, which is really poor.
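In other words, the observed values follow directly from truncating those saturated 32-bit results to 16 bits. A small stand-alone illustration of that arithmetic (not actual compiler output, just the equivalent C steps):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Step 1: VCVT.S32.F32 saturates +Inf / -Inf to the 32-bit limits. */
    int32_t w32Plus = INT32_MAX;   /* 0x7FFFFFFF, from +Inf */
    int32_t w32Neg  = INT32_MIN;   /* 0x80000000, from -Inf */

    /* Step 2: only the low 16 bits survive the store into the short. */
    printf("0x%04x 0x%04x\n",
           (unsigned)(uint16_t)w32Plus,   /* 0xffff */
           (unsigned)(uint16_t)w32Neg);   /* 0x0000 */
    return 0;
}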

As I understand it, the Cortex-M4 has no problem converting float to a signed 16-bit value with the VCVT instruction in a more sophisticated way (specifying a 16-bit result).

Is there a way to get this done?
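For what it's worth, one portable way to get saturating behaviour in plain C is to clamp the value before converting, which also avoids the out-of-range conversion being undefined. A minimal sketch, with a hypothetical helper name and a NaN policy of my own choosing (on the Cortex-M4, the SSAT instruction / CMSIS __SSAT intrinsic could presumably handle the 32-to-16-bit saturation step instead, but the plain C version below works anywhere):

#include <math.h>
#include <stdint.h>

/* Hypothetical helper: float to int16_t with saturation.
   +Inf, -Inf and other out-of-range values clamp to the 16-bit limits;
   NaN maps to 0 (an arbitrary choice).                                 */
static int16_t float_to_s16_sat(float f)
{
    if (isnan(f))        return 0;
    if (f >= 32767.0f)   return INT16_MAX;   /* also catches +Inf */
    if (f <= -32768.0f)  return INT16_MIN;   /* also catches -Inf */
    return (int16_t)f;   /* in range: well-defined conversion */
}

The assignments above would then become sTestPlus = float_to_s16_sat(fInfinity); and so on.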

Reply
  • But conversion from int to float is nothing mystical

It's also not what we've been talking about here. That was float-to-int, not int-to-float.

- why should this not be documented?

Why should it be? Per the language definition, your code causes undefined behaviour. That means the compiler itself can crash, or generate code that returns a different random number every time, and still be 100% correct, without needing to document anything.

Yes, ARM could have decided to define a particular behaviour in this case, and maybe they did. In that case, and only then, they should have documented this decision. But the fact that nobody seems to have found any such documentation would indicate that no such decision was made.

Summary: this code is buggy. It relies on unwarranted assumptions, and for the ARM compiler those assumptions happen to be incorrect, so the bug becomes noticeable.
