Hi all,
I am working on an ARM7 project, using it as a prewarper for hysteresis cancellation. In my C code there is an ISR hooked to an auto-reload timer. The ISR reads the values from the DAC, performs some calculation, and outputs at a constant sampling rate. The main function is an infinite loop with negligibly few instructions.
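Roughly, the structure looks like the sketch below (the function names are just placeholders for the device-specific parts, not my actual code):

```c
/* Rough skeleton of the setup described above; read_sample(), prewarp()
 * and write_output() stand in for the device-specific register accesses. */
#include <stdint.h>

uint32_t read_sample(void);          /* read the converter                  */
uint32_t prewarp(uint32_t x);        /* hysteresis-cancellation model       */
void     write_output(uint32_t y);   /* drive the output                    */

void timer_isr(void)                 /* hooked to the auto-reload timer     */
{
    uint32_t x = read_sample();
    uint32_t y = prewarp(x);         /* the bulk of the work happens here   */
    write_output(y);                 /* one output per timer tick           */
    /* acknowledge/clear the timer interrupt flag here (device specific)    */
}

int main(void)
{
    /* clock, timer, converter and interrupt setup ...                      */
    for (;;)
    {
        /* negligibly few instructions                                      */
    }
}
```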
When I use the µVision "Performance Analyzer" feature, it tells me that nearly half of the time is spent in the main function, accounting for e.g. 500 ms (depending on total runtime).
Surprised by this, I looked into execution profiling, and the sum of the times for the statements in the main loop is considerably smaller, e.g. 20 µs. Note that the difference is more than four orders of magnitude.
Any ideas on this?
But do you have any sleep instruction in there, or something like that?
All time not spent in interrupts _will_ be spent in main(). You can then decide if this time should be spent wildly running around a tiny loop, or if you should add a sleep instruction to have your processor halt until the next interrupt. But even if sleeping, the time spent will be in your main loop :)
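For example, on an NXP LPC21xx-class ARM7 (just a guess at your part; other ARM7 devices enter idle differently, so check your user manual), the main loop could idle like this:

```c
/* Minimal sketch of an idling main loop, assuming an LPC21xx-style ARM7
 * where setting the IDL bit in PCON halts the core until any interrupt. */
#include <LPC21xx.h>            /* Keil device header for LPC21xx parts    */

int main(void)
{
    /* ... timer / converter / interrupt setup ...                         */

    for (;;)
    {
        PCON |= 0x01;           /* enter idle mode; core stops clocking    */
        /* execution continues here right after the timer ISR has run      */
    }
}
```

The Performance Analyzer will still book that time against main(); it is just spent halted instead of spinning.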
Yes, I figured that all the time not spent in the ISR will be spent in the main loop, but somehow the displays of how much time that is cannot be trusted, at least one of them.
The problem is that I can set my model complexity at compile time, and I want it to be as high as possible while not loading the ISR so much that it is not finished before the next timer event...
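One way I could check that at runtime (a sketch, assuming an up-counting auto-reload timer; TIMER_TC is a placeholder name for the device-specific counter register): read the counter at the end of the ISR and keep the worst case.

```c
/* Sketch of measuring ISR headroom on the target, assuming the timer counts
 * up from 0 to its reload value each sampling period; TIMER_TC is a
 * placeholder for the device-specific counter register. */
#include <stdint.h>

extern volatile uint32_t TIMER_TC;      /* current timer count this period  */

volatile uint32_t isr_worst_ticks = 0;  /* worst-case ISR finish time so far */

void timer_isr(void)
{
    /* ... read input, run the model at the chosen complexity, write output ... */

    uint32_t used = TIMER_TC;           /* ticks elapsed since the period began */
    if (used > isr_worst_ticks)
        isr_worst_ticks = used;         /* watch this in the debugger; it must
                                           stay safely below the reload value  */
}
```

If isr_worst_ticks creeps toward the reload value as the model complexity goes up, that tells me where the limit is.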