
Execution profiling vs. performance analyzer mismatch

Hi all,

I am working on an ARM7 project, using it as a prewarper for hysteresis cancellation. In my C code there is an ISR hooked to an auto-reload timer. The ISR reads a value from the ADC, performs some calculation, and writes the result to the DAC at a constant sampling rate. The main function is an infinite loop with negligibly few instructions.

When I use the µVision "Performance Analyzer" feature, it tells me that nearly half of the time is spent in the main function, e.g. 500 ms (the exact figure depends on total runtime).

Surprised by this, I looked into execution profiling, and there the sum of the times for the statements in the main loop is considerably smaller, e.g. 20 µs. Note that the difference is more than four orders of magnitude.

Any ideas on this?

  • Yes, I figured that all the time not spent in the ISR will be spent in the main loop, but somehow the displays of how much time that is cannot be trusted, at least one of them.

    The problem is that I can set my model complexity at compile time, and I want it as high as possible while still not loading the ISR so heavily that it fails to finish before the next timer event...
