Hi, the following code for STM32F4 (Cortex-M4):
float fZero = 0.f;
float fInfinity;
short sTestPlus, sTestNeg;

int main(void)
{
    fInfinity = 1.f / fZero;   /* +infinity */
    sTestPlus = fInfinity;     /* float -> short */
    fInfinity = -1.f / fZero;  /* -infinity */
    sTestNeg  = fInfinity;     /* float -> short */
    while (1);
}
should result in the saturated values sTestPlus = 0x7FFF and sTestNeg = 0x8000. Instead it produces 0xFFFF and 0x0000.
The reason is that the compiler uses the signed 32-bit convert (which does produce the correctly saturated 0x7FFFFFFF and 0x80000000) and then simply truncates to the lower 16 bits, which is really poor.
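For reference, the clamp-before-cast workaround in plain C would look something like the sketch below; sat_float_to_s16 is a hypothetical helper name, not anything from a library:

#include <stdint.h>

/* Hypothetical helper: clamp into the int16_t range before casting,
   so the out-of-range conversion (undefined behaviour in C) never
   actually happens. */
static int16_t sat_float_to_s16(float f)
{
    if (f != f)         return 0;          /* NaN -> 0, as VCVT does    */
    if (f >= 32767.0f)  return INT16_MAX;  /* +inf, big values: 0x7FFF  */
    if (f <= -32768.0f) return INT16_MIN;  /* -inf, small values: 0x8000 */
    return (int16_t)f;                     /* in range: plain conversion */
}

But that costs compares and branches on every call, which is exactly what the hardware could avoid.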
As I understand it, the Cortex-M4 FPU has no problem doing float to signed 16-bit with the VCVT instruction in a more sophisticated way: its fixed-point form can specify a 16-bit result, which saturates to the 16-bit range.
Is there a way to get this done?
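In case it is useful, here is a minimal sketch of forcing that instruction with GCC inline assembly (assuming arm-none-eabi-gcc and the FPv4-SP FPU; float_to_s16_sat is my name, not a library function):

#include <stdint.h>
#include <string.h>

/* Sketch: VCVT.S16.F32 with #0 fraction bits converts a float to a
   16-bit signed fixed-point value, saturating to [-0x8000, 0x7FFF]
   (and mapping NaN to 0). */
static inline int16_t float_to_s16_sat(float f)
{
    float tmp = f;
    int32_t bits;

    /* The fixed-point VCVT works in place on a single S register,
       hence the "+t" (single-precision VFP register) read/write
       operand. */
    __asm ("vcvt.s16.f32 %0, %0, #0" : "+t"(tmp));

    /* The result sits sign-extended in the register; reinterpret the
       raw bits instead of reading tmp as a float. */
    memcpy(&bits, &tmp, sizeof bits);
    return (int16_t)bits;
}

In practice the 32-bit hardware convert already saturates on this target, so (int16_t)__SSAT((int32_t)f, 16) with the CMSIS __SSAT() intrinsic also yields 0x7FFF/0x8000, although the out-of-range cast itself is still undefined behaviour as far as the C standard is concerned.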
You still talk in terms of "illegal behaviour" for something the language standard specifically calls "undefined". Your personal views cannot simply be promoted into hard rules for compiler vendors to follow.