OK, so I am hoping someone can help me with this one. I use the simulator in uVision (v3.21) a huge amount to prove our system, but I have been having an issue with the CPU clock rate.
I use a lot of timers and capture compare. All timers work perfectly and interrupt or change pin states exactly at the right time periods or rates. So, that part of the simulator works mint.
BUT, it seems that the CPU instruction clock is really wrong. I always notice that code takes much longer to execute than I think it should. I timed a block of code on the scope and it was about as expected, but timing it in the simulator was way off. Meanwhile, all the I/O timing was perfect.
In order to test it I put 100 NOPs in a row and timed their execution. According to the data sheet (ST10), most instructions are single cycle, so at 40 MHz it should take 2.5us for 100 instructions. But, when the simulator's clock config window says the CPU clock is running at 40MHz, the NOPs actually take 45us (using the stopwatch to time).
By setting my oscillator frequency to 180 MHz I can get the right instruction execution rate, but then my timers and peripherals are all messed up!
Anybody know anything about this??? Note that "limit to real time speed" is not ticked.
Or the simulator may have made some decision about wait states for code memory access, which makes fast instructions suffer much more from slow flash accesses.
Why does it matter? If you want the "time for a routine", the cycle count just requires a multiplication.
BTW I am not sure the cycle count is accurate if you use one of the "nonstandard" derivatives.
The Keil simulator was never intended as an "emulator substitute".
Anyhow, if you really want to scope it, you can switch to SILabs chips; they have full emulation built in.
Some posters have posted that, for the above reason, they develop on a SILabs chip regardless of which chip goes into the final product.
Erik
I would use the cycle counter but I can't seem to find it! This post was originally about whether there was a way to set up the simulator to be accurate. Like I said, it is the only one I have ever seen that is not accurate, so I figured it was probably a setup thing. There is nothing to suggest that the part I use is not a standard derivative.
Our application uses around 45 interrupts, all at very high rates, and extremely time critical code. So knowing what is going on and how long it takes is very important. The part has no in-circuit debugging at all, so I rely on the simulator heavily for testing. So far it has served me very well. I always knew about the slow instruction execution but finally decided to do something about it.
Anyways, it seems that nobody has any definitive answers so I will just accept that it is not cycle accurate and get on with it...
The Cycle Count is under Register->System in the simulator. The cycle count and the calculated time is accurate if you configure the crystal frequency correctly.
You cannot simply measure the wall-clock time because it is a simulator, not an emulator. The wall-clock running time will depend on the speed of your PC running the simulator.
Be sure that you have selected the correct device to simulate and that you have set the simulator crystal frequency correctly.
Bradford
Great. Thanks for letting me know where that was. So, I just did some more checks. The "state" number multiplied by 1/clock freq = time. This is exactly what you would expect in a simulator.
The problem is that the states number increments by heaps for each instruction. For example, it seems to be either 18 or 36 states per instruction.
That should not be the case. Simulated time is based on the number of simulated ticks, not the PC speed. I can repeatedly measure 100ms between occurrences of a periodic 100ms timer interrupt. It takes much, much longer on my slow old laptop to break on timer interrupts than it does on my fast work PC, yet both PCs still measure 100ms between the interrupts.
If I had the right number of states per instruction, then everything would time up exactly right.
What determines the number of states added per instruction???
WOO HOO!!! I just made a breakthrough. I know it seems obvious now, but here is what I found. The BUSCON0 settings were not correct in the simulator (it had 15 wait states selected). On target hardware they are configured by the port 0 pin states at power up, but the simulator doesn't have default options for these. I will add some code to my debug functions to preset these at the start of simulation.
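For anyone hitting the same thing, that preset can live in a debugger initialization file so it runs at the start of every simulation session. A sketch, assuming your derivative's SFR symbols are visible to the debugger (the file name and the register value are illustrative only; set the wait-state configuration your board actually needs):

```
// debug.ini (hypothetical) - attach it via the project's debugger
// "Initialization File" option so it runs when simulation starts.
// The value below is illustrative; use your board's real bus timing.
BUSCON0 = 0x0000;
```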
I noticed instructions now take about 3-5 states, so still not exactly there yet but I am much closer...
As I suggested earlier: "Or the simulator may have made some decision about wait states for code memory access, which makes fast instructions suffer much more from slow flash accesses."
I have it all under control now. The programming manual gives the number of states used by most instructions (typically 2); 2 states are called one instruction cycle.
The final catch was to set the EA pin initial state in the simulator to put it in single chip mode. After that it worked as expected.
I am happy now and consider this issue resolved. No problem with the simulator, just user error!