Incorrect compilation when converting float to short/int16

Hi,
the following code for STM32F4 (Cortex-M4):

float fZero = 0.f;
float fInfinity;
short sTestPlus, sTestNeg;
int main(void) {
  fInfinity = 1.f/fZero;
  sTestPlus = fInfinity;
  fInfinity = -1.f/fZero;
  sTestNeg = fInfinity;
  while (1);
}

should result in sTestPlus = 0x7FFF and sTestNeg = 0x8000. Instead it produces 0xFFFF and 0x0000.

The reason is that the compiler uses the signed 32-bit conversion (which yields the correct saturated results 0x7FFFFFFF and 0x80000000) and then simply truncates to the lower 16 bits, which is really poor.

As I understand it, the Cortex-M4 has no problem converting float to signed 16-bit in a more sophisticated way using the VCVT instruction (specifying a 16-bit target with saturation).

Is there a way to get this done?

  • If I'm reading the spec correctly, the behaviour of this code is undefined:

    When a finite value of real floating type is converted to an integer type other than _Bool, the fractional part is discarded (i.e., the value is truncated toward zero). If the value of the integral part cannot be represented by the integer type, the behavior is undefined.

    Anyway, if you want to convert to a signed integer, why don't you use explicit conversion?

    fInfinity = 1.f/fZero;
    sTestPlus = (int)fInfinity;
    fInfinity = -1.f/fZero;
    sTestNeg = (int)fInfinity;
    
