
KEIL versus IAR floating point math...

I've been trying to figure out why the math benchmark code below runs about twice as fast with the IAR tools as with the KEIL tools when built for the same eval board. The pulse on LED1 is about 6 usec with the KEIL tools, but less than 3 usec with the IAR tools.

Basically, my code temporarily disables interrupts, drives an I/O pin high, does a math operation, and then drives the I/O pin low again. The function that does this is called repeatedly so that triggering on the pulse with an oscilloscope gives a pretty good indication of the chip+tools math performance.

EX:

float f1, f2, f3;

/* generate two pseudo-random operands; the +1.0 avoids a divide by zero */
f1 = (float)rand() / ((float)rand() + 1.0);
f2 = (float)rand() / ((float)rand() + 1.0);

AIC_DCR = 0x00000001;   /* AIC Debug Control Register: gate interrupts around the measurement */
PIOA_SODR = LED1;       /* drive the LED1 pin high */
f3 = f1 / f2;           /* the timed operation: a single-precision divide */
PIOA_CODR = LED1;       /* drive the LED1 pin low */
AIC_DCR = 0x00000003;   /* restore the AIC Debug Control Register */

Has anyone looked into which toolset does floating-point math faster? And why does the code generated with the KEIL tools seem to run only about half as fast as the same code generated with the IAR tools?

Can anyone suggest anything (software changes only) to speed up the math in the KEIL-generated code?
