OK, so I am hoping someone can help me with this one. I use the simulator in uVision (v3.21) a huge amount to prove our system, but I have been having an issue with the CPU clock rate.
I use a lot of timers and capture compare. All timers work perfectly and interrupt or change pin states exactly at the right time periods or rates. So, that part of the simulator works mint.
BUT, it seems that the CPU instruction clock is really wrong. I always notice that code takes much longer to execute than I think it should. I timed a block of code on the scope and it came out about as expected, but the same block timed in the simulator was way off, even though all the I/O timing was perfect throughout.
In order to test it I put 100 NOPs in a row and timed their execution. According to the data sheet (ST10), most instructions are single cycle, so at 40 MHz, 100 instructions should take 2.5us. But when the simulator's clock config window says the CPU clock is running at 40MHz, the NOPs actually take 45us (using the stopwatch to time them).
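In case it helps, this is roughly what the test looked like (a minimal sketch; I'm assuming the _nop_() intrinsic from the Keil C166 toolchain's intrins.h, and the macro is only there to keep the listing short):

#include <intrins.h>   /* _nop_() intrinsic, assumed available in the C166 toolchain */

/* Ten back-to-back NOPs per macro expansion. */
#define NOP10()  _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); \
                 _nop_(); _nop_(); _nop_(); _nop_(); _nop_()

void nop_timing_test (void)
{
    /* 100 NOPs in a row, no loop overhead. At a 40 MHz CPU clock,
       100 single-cycle instructions should take 100 * 25 ns = 2.5 us. */
    NOP10(); NOP10(); NOP10(); NOP10(); NOP10();
    NOP10(); NOP10(); NOP10(); NOP10(); NOP10();
}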
By setting my oscillator frequency to 180 MHz I can get the right instruction execution rate, but then all my timers and peripherals are messed up!
Anybody know anything about this??? Note that "limit to real time speed" is not ticked.
If you want to 'time' something, use the cycle count and calculate the time from it.
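For example: time = cycles / f_CPU, so 100 cycles at a 40 MHz CPU clock works out to 100 / 40,000,000 s = 2.5 us.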
Erik
I used the stopwatch feature, which measures elapsed time. I am not sure where to find the cycle count, but it would be useful to confirm how many cycles each instruction is taking.
My problem is that in the simulator my timing and I/O all work fine, but the code takes so long to execute that it becomes interrupt bound, which I know is not the case on the target chip in practice. Not even nearly. The simulator executes instructions almost 10 times too slowly!