Hi,
I am running a simulation for an ARM7 (LPC2368) microcontroller in the Keil uVision 4 IDE. I wrote a delay function using a timer. It works perfectly on hardware, but when I try to simulate it, it takes much longer than expected.
Say I set the timer to run for 500 ms; it then takes about 4 seconds of real time. So every time a delay function is encountered, it takes far longer to execute than it should.
How do I solve this timing issue?
TIA.
Now I have looked carefully and found that when I call the delay function for the first time, the timer count starts from 0 and stops when it matches the match register value.
When I call the delay function again, the timer count starts from its last value instead of counting from 0. That's why it is taking more time to count.
Here is my code.
void delayMs(DWORD delayInMs)
{
    PCONP |= 0x00400000;                     /* power up Timer 2 */
    T2TCR = 0x02;                            /* reset timer */
    T2PR  = 0x00;                            /* set prescaler to zero */
    T2MR0 = delayInMs * (Fpclk / 1000) - 1;  /* match value in PCLK ticks */
    T2IR  = 0xff;                            /* clear all interrupts */
    T2MCR = 0x04;                            /* stop timer on match */
    T2TCR = 0x01;                            /* start timer */

    /* wait until the delay time has elapsed */
    while (T2TCR & 0x01)
        ;

    return;
}
Is there any problem with my code?
1) Why are you doing a full init of the timer on every delay call? If it is already initialized, it's enough to start it when you need it, adding fewer extra clock cycles to the delay.
2) Why don't you set the flag to auto-clear the counter on match (MR0R, bit 1 of T2MCR, i.e. T2MCR = 0x06 for reset-and-stop on match)? Then it will be zero the next time you need it.
3) Why not consider keeping the timer free-running all the time? Just capture the current timer value as t0, then poll the timer value and break when "(current - t0) >= required delay".
Note that a free-running timer also lets other parts of the code measure delays from the same timer while you are busy doing something else.
So:
t0 = t1 = t2 = t3 = timer_value;
for (;;) {
    do_something();
    do_something_else();

    now = timer_value;

    if ((now - t0) >= delay1) {
        do_special_1();
        t0 = now;
    }
    if ((now - t1) >= delay2) {
        do_special_2();
        t1 = now;
    }
    if ((now - t2) >= delay3) {
        do_special_3();
        t2 = now;
        short_delay();
        do_special_3a();
        short_delay();
        do_special_3b();
    }
    ...
}
This is very useful if you have a 32-bit timer. If it ticks at 1 MHz, it can time with 1 us resolution yet span about 4000 seconds. That means you can do short delays of 50 us as direct calls to a delay function, while at the same time doing something every 15 ms, something else every 470 ms, and something else every 1.23 seconds, all measured concurrently by the same timer.
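As a concrete sketch of such a direct-call delay on the LPC2368 (timerInit and delayUs are my names, not anything from the original post; it assumes Fpclk is an exact multiple of 1,000,000 and that Timer 2 is left free-running at Fpclk with prescaler 0):

/* One-time setup: leave Timer 2 free-running forever. */
void timerInit(void)
{
    PCONP |= 0x00400000;   /* power up Timer 2 */
    T2TCR = 0x02;          /* hold the counter in reset */
    T2PR  = 0x00;          /* no prescale: counter ticks at Fpclk */
    T2MCR = 0x00;          /* no match actions, just free-run */
    T2TCR = 0x01;          /* start it, and never stop it again */
}

/* Blocking delay via an unsigned delta on the free-running counter.
   Rollover-safe as long as the request is shorter than the full
   32-bit timer period. */
void delayUs(DWORD delayInUs)
{
    DWORD ticks = delayInUs * (Fpclk / 1000000);  /* ticks per us */
    DWORD t0 = T2TC;                              /* capture start */
    while ((DWORD)(T2TC - t0) < ticks)
        ;                                         /* wrap-safe compare */
}

With that in place, the main loop in the example above can read T2TC directly as the timer value.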
per,
What do you do when the timer overflows? What happens if the start value of the timer happens to be just before it would overflow?
Doesn't matter...
He's doing a delta measurement. If the unsigned variable is the same size as the unsigned counter (uint16_t or uint32_t), then the unsigned arithmetic hides the rollover. The case to worry about is when the delay exceeds the rollover period of the timer, in which case you'd be smart enough to split the delay into pieces so as not to run into that issue: if the counter rolls over every 10 seconds, you don't ask to wait 11 seconds, you ask to wait 1 second 11 times.
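In code, that chunking could look like this (a sketch; delay_ms() is a hypothetical rollover-safe delay good for requests up to one second):

void delay_long_ms(unsigned long total_ms)
{
    /* Keep each individual wait well inside the counter's rollover
       period; here the chunk size is 1000 ms. */
    while (total_ms >= 1000) {
        delay_ms(1000);
        total_ms -= 1000;
    }
    if (total_ms)
        delay_ms(total_ms);
}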
For those more inquisitive than assuming in nature, check out the ANSI standards.
A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.
This is taken from C99.
It is safe to do what was illustrated above. But it's one of those things that can easily be forgotten by those who don't come across such methods very often, so a few explanatory comments would be advisable in real-life source code.
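To see the wraparound arithmetic in action, here's a tiny standalone demonstration (plain C, nothing LPC-specific):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t t0  = 0xFFFFFFF0u;   /* sampled just before the rollover */
    uint32_t now = 0x00000010u;   /* sampled just after the rollover  */

    /* The subtraction is reduced modulo 2^32, so the result is
       0x20 = 32 ticks, exactly the elapsed count across the wrap. */
    printf("elapsed = %lu ticks\n", (unsigned long)(now - t0));
    return 0;
}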
ralph,
Thanks for the heads-up.