OK, so I am hoping someone can help me with this one. I use the simulator in uVision (v3.21) a huge amount to prove our system, but I have been having an issue with the CPU clock rate.
I use a lot of timers and capture/compare. All the timers work perfectly and interrupt or change pin states at exactly the right periods and rates, so that part of the simulator works perfectly.
BUT, it seems the CPU instruction clock is badly wrong. I keep noticing that code takes much longer to execute in the simulator than it should. I timed a block of code on real hardware with a scope and it was about what I expected, but timing the same block in the simulator was way off. All the while, the I/O timing remained perfect.
To test it, I put 100 NOPs in a row and timed their execution. According to the ST10 data sheet, most instructions are single-cycle, so at 40 MHz, 100 instructions should take 2.5 µs. But when the simulator's clock configuration window says the CPU clock is running at 40 MHz, the NOPs actually take 45 µs (using the stopwatch to time them).
By setting my oscillator frequency to 180 MHz I can get the right instruction execution rate, but then all my timers and peripherals are messed up!
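For reference, this is roughly the test I ran (a minimal sketch, assuming the Keil C166 `_nop_()` intrinsic from `intrins.h`; the expected-time figures in the comments are just the arithmetic from the data sheet):

```c
#include <intrins.h>            /* Keil C166: _nop_() intrinsic */

/* Ten straight-line NOPs per macro, so no loop/branch overhead. */
#define NOP10() _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); \
                _nop_(); _nop_(); _nop_(); _nop_(); _nop_()

/* 100 NOPs in a row. With NOP being single-cycle on the ST10,
 * at a 40 MHz CPU clock this should take 100 * 25 ns = 2.5 us.
 * The simulator's stopwatch reports ~45 us instead.            */
void nop_test (void)
{
    NOP10(); NOP10(); NOP10(); NOP10(); NOP10();
    NOP10(); NOP10(); NOP10(); NOP10(); NOP10();
}
```

I set a breakpoint on the first NOP and another just after the last one, noted the simulator's elapsed-time/stopwatch readout at each stop, and took the difference as the execution time.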
Anybody know anything about this??? Note that "limit to real time speed" is not ticked.