I want to create delay functions in a separate C file. The code should be as accurate as possible without using an internal timer. I read somewhere that "calling the routine takes about 22 µs", though that 22 µs may be different for my CPU clock; if possible, this factor should also be taken into account.
I am using an 89S52 with a 24 MHz crystal. I tried the following code.
void usdelay(unsigned int us)
{
    while (us--) {
        _nop_();    //0.5 us single-cycle instruction delay
        _nop_();    //0.5 us single-cycle instruction delay
    }
}

void msdelay(unsigned int ms)
{
    unsigned long tm = 1000 * ms;
    while (tm--) {
        _nop_();    //0.5 us single-cycle instruction delay
        _nop_();    //0.5 us single-cycle instruction delay
    }
}

void secdelay(unsigned int sec)
{
    unsigned long tm = 1000 * sec;
    while (tm--) {
        msdelay(1);
    }
}
The problem is that the µs & ms delays seem right (maybe; I have not measured them), but it takes very long to finish secdelay with a 1 sec timing. Please HELP.
Increasing the number of NOP's will make the relative contribution of function entry and exit smaller. If you don't need fine delay granularity and have some code memory to spare, make it 100 NOP's in a row or so.
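A sketch of what that looks like, assuming Keil C51 and its _nop_() intrinsic from <intrins.h>; the name delay_50us and the cycle math (one machine cycle = 0.5 µs at 24 MHz on a standard 12-clock 8051) are just for illustration:

#include <intrins.h>

/* 100 single-cycle NOPs = ~50 us at 0.5 us per machine cycle, so the few
   cycles of call/return overhead become a small fraction of the total. */
void delay_50us(void)
{
    _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_();
    _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_();
    _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_();
    _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_();
    _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_();
    _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_();
    _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_();
    _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_();
    _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_();
    _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_(); _nop_();
}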
A big mistake you make is to ignore the time needed by the loop itself. The total loop time is not 0.5 + 0.5 µs: the decrement, test and branch the compiler generates for while (us--) cost several machine cycles of their own, so each iteration likely takes a few microseconds, not one.
You are not even in control of the code generated.
Consider making the smallest loop step 10 µs instead of 1 µs. And consider putting it in an assembler file.
Can you please give some example code for us, ms & secs delay routines?
You have lots of examples on the net. Why ask someone for yet another one? What is wrong with the existing examples?
If you had spent a bit of time you would have already picked up the common warnings about sw-only delays written in a high-level language.
If you have a function that gives an n × 10 µs delay, it is trivial to build ms or sec delays on top of it.
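Not from the thread, just a sketch of that layering; it assumes a hypothetical delay_10us(n) primitive that busy-waits n × 10 µs (e.g. written in an assembler file as suggested earlier) and ignores the few microseconds of loop overhead per millisecond:

/* delay_10us(n) is assumed to busy-wait n * 10 us; implemented elsewhere. */
extern void delay_10us(unsigned int n);

void msdelay(unsigned int ms)
{
    while (ms--)
        delay_10us(100);    /* 100 * 10 us = 1 ms */
}

void secdelay(unsigned int sec)
{
    while (sec--)
        msdelay(1000);      /* 1000 ms = 1 s */
}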
void usdelay(unsigned int us)
{
    while (us--) {
        _nop_();    //0.5 us single-cycle instruction delay
        _nop_();    //0.5 us single-cycle instruction delay
    }
}
What about the time the while() itself takes? Look at the disassembly; illusions about the time a C construct takes are always false.
If you want precise timing:
1) write it in assembler
2) remember the time the call and parameter transfer take
Erik
When you have realized the above, realize the other two. Again, look at the disassembly.
Here, for example: www.8052.com/.../162556
Follow the "And here's how" link for a worked example of how to make a C-callable assembler delay routine using Keil:
www.8052.com/.../149030
From the link:

"Addendum: Specifically for a software delay function, you don't want any interrupts to mess up your timing. You can achieve this by using the DISABLE directive in the original 'C' source file."
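For illustration (not from the linked page): in Keil C51 the directive is written #pragma disable, and as I read the Keil documentation it applies to the function defined immediately after it, which then runs with interrupts disabled for its whole duration:

#include <intrins.h>

#pragma disable             /* Keil C51: the next function runs with interrupts off */
void usdelay(unsigned int us)
{
    while (us--) {
        _nop_();
        _nop_();
    }
}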
THAT would, in most cases, be "playing with fire"
But then most sw delays (except the very, very short ones in ns to us range for settle/hold while bit-fiddling) can be described as "playing with fire". And when playing with settle/hold times, the absolute time value normally isn't important. What is important is to have at least x ns/us of delay to make sure the processor isn't too fast for the external electronics.
It is way better to make use of a free-running timer and poll the timer value in the while loop. Then the while loop will auto-adapt to losses from interrupts. And many different delays can be handled using the same timer.
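A sketch of that approach for the 89S52 at 24 MHz, assuming Keil's <reg52.h>. Timer 0 is left free-running in 16-bit mode 1 and wraps every 65536 ticks (~32.8 ms here), so this particular routine is only good for delays shorter than that; longer delays would layer a ms loop on top. The function names are mine:

#include <reg52.h>

/* At 24 MHz with 12 clocks per machine cycle, the timer ticks every 0.5 us,
   i.e. 2 ticks per microsecond. */

void timer0_init(void)
{
    TMOD = (TMOD & 0xF0) | 0x01;    /* timer 0, mode 1: free-running 16-bit */
    TR0 = 1;                        /* start it and just let it wrap forever */
}

static unsigned int timer0_now(void)
{
    unsigned char hi, lo;
    do {                            /* re-read TH0 in case TL0 rolled over */
        hi = TH0;
        lo = TL0;
    } while (hi != TH0);
    return ((unsigned int)hi << 8) | lo;
}

/* Still busy-waits, but cycles stolen by interrupts are compensated for,
   because we measure elapsed timer ticks instead of counting iterations. */
void usdelay(unsigned int us)       /* us < ~32000 */
{
    unsigned int start = timer0_now();
    unsigned int ticks = us << 1;   /* 2 ticks per us */
    while ((unsigned int)(timer0_now() - start) < ticks)
        ;
}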
It is also way better to see if there are any peripherals with a baud-rate feature that can be abused. Maybe sending SPI or UART communication out into the air, with each byte sent representing x microseconds. Some chips allow this without even mapping the output from the peripheral onto a physical processor pin.
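The kind of trick that means, sketched for the 8052's own UART (which does drive the TXD pin, so strictly speaking it is the "in the air" variant only on chips that let you unmap it). It assumes the UART has already been set up in mode 1 with timer 1 as baud generator; at 9600 baud one 10-bit frame takes roughly 1.04 ms:

#include <reg52.h>

/* Each dummy byte is one 10-bit frame: at 9600 baud that is ~1.04 ms of
   hardware-timed delay, immune to the compiler's choice of loop code. */
void frame_delay(unsigned char frames)
{
    while (frames--) {
        TI = 0;
        SBUF = 0x00;        /* nothing needs to be listening */
        while (!TI)         /* TI sets when the frame has been shifted out */
            ;
    }
    TI = 0;
}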
Having longer sw-only delays means that the processor can do zero other operations while waiting. Any other thing (including servicing interrupts) represents a "leakage" of CPU time, making the delay longer than intended. So having a 1-second delay as a busy-loop that is strictly counting means millions of processor instructions are wasted while the processor runs at maximum speed without getting anywhere. Besides being inelegant, it also wastes power. Extra bad if the device is battery-operated.
Others have mentioned the issues / limitations associated with your approach. Achieving very precise software delays under all cases is very difficult, if not impossible. However, achieving reasonably accurate ms level delays is doable.
My approach is similar to yours, except that I built mine on three levels:
1) delay(): delays a certain number of cycles.
2) delay_us(): delays a certain time, built on delay().
3) delay_ms(): takes delay_us() and does ms-level delays.
#define DLY_US 100      //every delay_us(DLY_US) gets to 1ms, under 1000000Mhz cpu clock. Adjust this for chip
#define F_CPU 2000000ul //cpu speed / ticks

void delay_us(unsigned short us)
{
    delay(us);          //delay() takes an 8-bit input
    us = us >> 8;
    while (us--)
        delay(0xff);    //this minimizes the overhead for short delay loops
}

void delay_ms(unsigned short ms)
{
    while (ms--)
        delay_us(DLY_US * F_CPU / 1000000ul);   //adjust DLY_US to achieve the right amount of delays in ms
}
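The delay() primitive referenced above isn't shown; a hypothetical stand-in (a real one would have to be calibrated against the compiler's generated loop code, or written in assembler, per the earlier warnings) might be no more than:

static void delay(unsigned char cycles)
{
    while (cycles--)
        ;   /* each iteration costs a small, fixed number of machine cycles */
}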
You can take this approach to achieve precise us level delays but they are not as reliable as ms level delays.
If you wish to achieve anything longer than ms level delays, you should think about other approaches, like a real timer.
A typo in there somewhere, methinks...?!