
KEIL versus IAR floating point math...

I've been trying to figure out why the math benchmark code below appears to run about twice as fast on the same eval board, depending on whether I use the KEIL or the IAR tools to build the project. The pulse on LED1 is about 6 µs with the KEIL tools, while it's less than 3 µs with the IAR tools.

Basically, my code temporarily disables interrupts, drives an I/O pin high, does a math operation, and then drives the I/O pin low again. The function that does this is called repeatedly so that triggering on the pulse with an oscilloscope gives a pretty good indication of the chip+tools math performance.

EX:

float f1, f2, f3;

/* operand setup (outside the timed region); the f suffix keeps
   the arithmetic in single precision */
f1 = (float)rand() / ((float)rand() + 1.0f);
f2 = (float)rand() / ((float)rand() + 1.0f);

AIC_DCR = 0x00000001;   /* disable interrupts */
PIOA_SODR = LED1;       /* LED1 pin high: pulse start */
f3 = f1 / f2;           /* the timed operation: one float divide */
PIOA_CODR = LED1;       /* LED1 pin low: pulse end */
AIC_DCR = 0x00000003;   /* re-enable interrupts */
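For reference, here is a minimal self-contained sketch of the harness described above, with the benchmark function called in a loop so the scope can trigger on the LED1 pulse. The extern register declarations and the LED1 mask are placeholders for whatever the AT91-style device header in the real project provides.

#include <stdlib.h>                 /* rand() */

/* Placeholders for the device-header definitions used above. */
extern volatile unsigned long AIC_DCR, PIOA_SODR, PIOA_CODR;
#define LED1 (1ul << 0)             /* hypothetical LED1 pin mask */

static volatile float f3;           /* volatile so the divide can't be optimized away */

static void fp_benchmark(void)
{
    float f1, f2;

    /* operand setup stays outside the timed region */
    f1 = (float)rand() / ((float)rand() + 1.0f);
    f2 = (float)rand() / ((float)rand() + 1.0f);

    AIC_DCR = 0x00000001;           /* disable interrupts */
    PIOA_SODR = LED1;               /* LED1 pin high: pulse start */
    f3 = f1 / f2;                   /* the operation being timed */
    PIOA_CODR = LED1;               /* LED1 pin low: pulse end */
    AIC_DCR = 0x00000003;           /* re-enable interrupts */
}

int main(void)
{
    for (;;)
        fp_benchmark();             /* repeat so the pulse is easy to trigger on */
}

Measuring the pulse width on the scope then gives the time for the single divide (plus the two pin writes).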

Can anyone tell me whether they've looked into which toolset does floating-point math faster, and why the code generated with the KEIL tools seems to run at only about half the speed of the same code built with the IAR tools?

Can anyone suggest what I could do (software changes only) to speed up the math in the KEIL-generated code?

  • Oh, one added note:
    When I had the opportunity to do a "real benchmark", the difference between the $1000+ toolsets was not big enough to choose one over the other, either for code compactness (±15%) or execution speed (±20%). One exception: Tasking stated "you do not need to try our tools, they are the best", so they did not get evaluated, just dropped. None of the sub-$1000 toolsets came even close. So, in my opinion, the choice should be based on 3 things: 1) and most important, is the support any good? 2) do you like it? and 3) does it support all the uCs in the thingy?

    I think that all toolsets have strengths and weaknesses and, for that reason, a benchmark says nothing (it does not average these out); doing the actual job does.

    Erik
