I'm using uVision 4.7 to debug my Mbed LPC1768 (Cortex-M3). When I run the debugger, everything works fine except that the elapsed time in seconds is way off: in the register window, "sec" increases about 10 times too fast. I believe this is due to uVision thinking the Mbed is running at a slower clock rate than it actually is, since uVision uses the number of instructions executed to calculate the time. Does anyone know how to change this clock rate?
Hey Matthais,
Thanks for responding. At first it was set to 12 MHz. My processor runs at 96 MHz, so I changed it to that, but it made no difference; the timing remained off by the same degree. Am I supposed to put the actual crystal rate in there? If so, how does uVision know how much the processor scales it by?
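For what it's worth, here is a rough sketch (plain C, just the arithmetic) of how the LPC1768 typically derives a 96 MHz core clock from the 12 MHz crystal, assuming the usual mbed PLL0 settings of M = 12, N = 1 and a CCLK divider of 3. The formulas come from the LPC17xx user manual; the values your project actually uses are set in system_LPC17xx.c, so treat these numbers as illustrative:

```c
/* Illustrative only: the LPC17xx PLL0 path from crystal to core clock,
 * assuming the common mbed LPC1768 configuration (M = 12, N = 1, /3).
 * Check PLL0CFG and CCLKCFG in your system_LPC17xx.c for the real values. */
#include <stdio.h>

int main(void)
{
    const unsigned fosc    = 12000000u; /* external crystal (XTAL)          */
    const unsigned pll_m   = 12;        /* PLL0 multiplier  (MSEL0 + 1)     */
    const unsigned pll_n   = 1;         /* PLL0 pre-divider (NSEL0 + 1)     */
    const unsigned cclkdiv = 3;         /* CPU clock divider (CCLKSEL + 1)  */

    /* PLL0 output: Fcco = (2 * M * Fosc) / N  ->  288 MHz here */
    unsigned fcco = (2u * pll_m * fosc) / pll_n;

    /* Core clock: CCLK = Fcco / CCLKDIV  ->  96 MHz here */
    unsigned cclk = fcco / cclkdiv;

    printf("Fcco = %u Hz, CCLK = %u Hz\n", fcco, cclk);
    return 0;
}
```

So the crystal itself stays at 12 MHz; the PLL multiplier/divider registers are what bring the core up to 96 MHz.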
The simulator must be able to figure out how the crystal frequency is scaled; how else would it know how much to scale the clock for the baud rate, watchdog timer, etc.? This of course assumes we are talking about a processor whose PLL and related clock hardware have simulation support.
BTW, I'm not simulating. I'm debugging through a CMSIS-DAP probe over USB, and I'm looking for an answer for that setup.