Hi, I am using an LPC2378 microcontroller running at CCLK = 68.57 MHz. I want a delay of 1 us. Please tell me if this function is correct or not:
#define MICROSECONDS(x) ((x) * 15)   /* loop passes per us (tuned for 72 MHz) */

__asm void delayhk(unsigned int a_microseconds)
{
loop
    MOVS    R1, R0          ; copy the counter, setting flags from its current value
    SUB     R0, R0, #1      ; decrement (flags untouched)
    BNE     loop            ; repeat while the value tested by MOVS was non-zero
    BX      LR
}

Usage: delayhk(MICROSECONDS(10));
I copied the above code from another thread, but there they were running at 72 MHz. If I keep this code without changing my system clock settings, it would generate a lag of 0.291545 usec. I am a student; please suggest an alternative. Thanks.
Infinite loop? Are you saying that you looked for an exact match? Never ever do that when you wait for a timer to reach a specific value. If the processor gets an interrupt, or is too slow to start watching before the exact match has passed, you'll get a very long delay. Always use >= or <= or similar to get more robust code.
Whenever playing with hardware, interrupts or multithreaded applications, you must be prepared for multiple events happening between two tests.
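To make the difference concrete, here is a minimal sketch; the free-running counter is abstracted as a hypothetical volatile variable named timer, not a real register:

#include <stdint.h>

extern volatile uint32_t timer;   /* any free-running 32-bit up-counter (hypothetical) */

/* Fragile: waits for an exact match. If an interrupt or slow code makes
   the program miss the single tick where the counter equals the target,
   it waits a whole counter wraparound instead. */
void wait_until_exact(uint32_t target)
{
    while (timer != target)
        ;
}

/* Robust: unsigned subtraction yields the elapsed count even across
   wraparound, and the >= condition cannot be missed by testing late. */
void wait_at_least(uint32_t start, uint32_t ticks)
{
    while ((uint32_t)(timer - start) < ticks)
        ;
}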
Previously I have used the following code to work with TIMER0:
void timer_init(void)
{
    /* Enable and set up the timer interrupt, start the timer */
    T0MR0 = 17;         /* 1 usec = 17 at 17.143 MHz PCLK; 1 msec = 17142 */
    T0MCR = 3;          /* Interrupt and Reset on MR0 */
    T0TCR = 1;          /* Timer0 Enable */
    VICVectAddr4 = (unsigned long)T0_IRQHandler; /* Set Interrupt Vector */
    VICVectCntl4 = 15;  /* use it for Timer0 Interrupt */
    VICIntEnable |= (1 << 4); /* Enable Timer0 Interrupt */
}
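For reference, a minimal sketch of the handler referenced above, following the usual LPC23xx pattern (the flag variable is an example name, not from the original code):

volatile unsigned int timer0_flag;   /* example flag polled by the waiting code */

void T0_IRQHandler(void) __irq
{
    T0IR = 1;           /* clear the MR0 match interrupt flag */
    timer0_flag = 1;    /* signal that the match occurred */
    VICVectAddr = 0;    /* acknowledge the VIC */
}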
It was working fine with T0MR0 = 17142 for a 17.143 MHz PCLK (68.57 / 4), but when I set it to 17 for a microsecond delay it just hangs in the delay function.
Were you planning on having the timer generate an interrupt every 1 us? One million interrupts per second? How many clock cycles does the processor have each us? How many clock cycles does a single interrupt take?
Think again.
I was talking about using a free-running timer for short delays.
Interrupt-based delays are better for longer delays - such as when you have an interrupt every 1ms or every 100ms or similar.
Yeah, exactly. I was trying to generate the 1 usec delay with the timer interrupt but was unable to, so I thought about doing it in inline assembly using the above-mentioned code. Please check if it's correct or not.
1) Is it meaningful to use an interrupt for a 1 us delay, unless the ISR requires way less than 1 us?
2) What happens if your interrupt doesn't turn off the timer but tries to service the 1 MHz interrupts as fast as possible? How much CPU time will you have left for your main program?
3) Is an interrupt the only way you can use a timer? Don't you think the main program can just poll the timer counters? This has already been covered a large number of times on this and other forums.
One important thing: if you feel like experimenting, you should always think first. Write down what you _think_ will happen. Compare with what really happens. Spend some time pondering any differences found. Is there anything you can learn from it? Just experimenting without first spending some time thinking about possible outcomes will not take you very far. Trial-and-error isn't a very fast-converging algorithm. Algorithms that interpolate/extrapolate and then look at the resulting error term converge very much faster.
One thing teachers hope for when giving out school assignments is that the students will not just push forward to something to turn in, but that they keep their eyes open and try to learn as much as possible on the route from receiving the assignment to turning in a solution. A student who also documents this route can hope for much better grades.
Sorry to say that, but you are asking too many questions without giving a single answer. Please try to help if you can; don't try to confuse me. Thanks.
I have already told you how you can poll a timer for delays.
I have already told you the difference between a single interrupt and having the timer produce continuous interrupts. You do realize that an ISR need not just acknowledge an interrupt - it can also turn off the interrupt source?
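To make the one-shot idea concrete, a sketch using the LPC23xx register names from earlier in the thread. It assumes the handler has been installed in a VIC slot the same way as in timer_init above; the done flag is an example name, not from the original posts:

static volatile unsigned int done;

void T0_OneShotHandler(void) __irq
{
    T0TCR = 0;          /* stop the timer - the interrupt source is now off */
    T0IR  = 1;          /* clear the MR0 match flag */
    done  = 1;          /* tell the waiting code that time is up */
    VICVectAddr = 0;    /* acknowledge the VIC */
}

void delay_one_shot(unsigned int match_ticks)   /* delay length in PCLK ticks */
{
    done  = 0;
    T0TCR = 2;              /* stop and reset the counter */
    T0MR0 = match_ticks;    /* set the match value */
    T0MCR = 1;              /* interrupt on MR0 only - no reset, no stop */
    T0TCR = 1;              /* start; exactly one interrupt will fire */
    while (!done)
        ;                   /* the single interrupt ends the wait */
}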
But for extremely short delays (like 1 us, 2 us) I would probably create dedicated functions that don't take any parameters.
That's what I am asking help for: to create a dedicated function for a 68.57 MHz CCLK and to generate 1 usec delays with it.
If you need 1 us, just concatenate enough instructions until you have consumed enough cycles. You know the clock frequency of your processor core, so you know the number of nop instructions or similar that you need.
Just notice that a delay based on CPU cycles will always fail when you reconfigure the PLL to run the processor at a different speed. And if you change to an ARM processor with a different core, the instruction timings may change.
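With that caveat in mind, a sketch of such a dedicated, parameterless delay for CCLK = 68.57 MHz (1 us is roughly 69 CPU cycles). The cycle counts assume an ARM7TDMI executing from zero-wait-state memory; flash wait states or the MAM will stretch them, so treat the loop constant as a starting point and tune it against an oscilloscope:

__asm void delay_1us(void)
{
    MOV     R0, #16         ; tune this constant against measured delay
delay_loop
    SUBS    R0, R0, #1      ; 1 cycle: decrement and set flags
    BNE     delay_loop      ; 3 cycles when taken
    BX      LR              ; with MOV, BL and BX overhead: ~69 cycles total
}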
But depending on core speed, it is possible to create delays down in the us range with a polled timer too. All the code needs to do is pick up the current value of the free-running timer and then loop until the difference between the current and start values is large enough. The error you get is up to one granularity of the timer, plus maybe two instruction times, plus the entry/exit time of the function. Of course, you can inline short delays to remove any errors from entry/exit code.
If the peripherals run at 10 MHz, then the granularity error from polling the timer will be no more than 0.1 us, which might be acceptable. And such a polled delay can span very long times with the same resolution, so you can just as well delay for 60 seconds with similar errors.
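As a sketch of such a polled delay on the LPC2378: Timer1 is left free-running at PCLK and the delay function just watches the tick count. It assumes PCLK = CCLK/4 = 17.14 MHz (the reset default), i.e. about 17 ticks per us, and that the lpc23xx register header is already included:

#define TICKS_PER_US 17UL    /* ~17.14 MHz PCLK, rounded down */

/* Call once at startup: let Timer1 free-run, one tick per PCLK cycle. */
void timer1_freerun_init(void)
{
    T1TCR = 2;    /* stop and reset the counter */
    T1PR  = 0;    /* no prescaling */
    T1MCR = 0;    /* no match actions, no interrupts */
    T1TCR = 1;    /* let it run */
}

/* Polled delay: unsigned subtraction handles counter wraparound, and
   the >=-style test avoids the exact-match trap discussed earlier. */
void delay_us(unsigned long us)
{
    unsigned long start = T1TC;
    unsigned long ticks = us * TICKS_PER_US;
    while ((unsigned long)(T1TC - start) < ticks)
        ;   /* busy-wait; granularity error is below one PCLK tick */
}

Note that rounding 17.14 down to 17 makes the delay run slightly short (under 1%), which is usually within the jitter you accept for polled delays anyway.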
The advantage of a free-running timer for delays is that for longer delays (where you can accept slightly worse jitter) you can run a mini-loop that performs actions (scanning a keyboard, processing serial data, ...) while you poll the timer - just the same as when you wait for an interrupt handler to set a flag saying that enough time has passed.
But the big question is: why are you still stuck? A bit of your own work should have got you past these delay problems already.