
Delay routine problem

I want to create delay functions in a separate C file. The code should be as accurate as possible without using an internal timer. I read somewhere that 'Calling the routine takes about 22 µs'. That 22 µs may well be different for my CPU clock, so if possible this overhead should also be taken into account.

I am using an 89S52 with a 24 MHz crystal. I tried the following code.

#include <intrins.h>    // provides _nop_()

void usdelay(unsigned int us){
        while (us--){
                _nop_();        //0.5uS single-cycle instruction delay
                _nop_();        //0.5uS single-cycle instruction delay
        }
}

void msdelay(unsigned int ms){
        unsigned long tm = 1000*ms;
        while (tm--){
                _nop_();        //0.5uS single-cycle instruction delay
                _nop_();        //0.5uS single-cycle instruction delay
        }
}


void secdelay(unsigned int sec){
        unsigned long tm = 1000*sec;
        while (tm--){
                msdelay(1);
        }
}

The problem is that the µs and ms delays seem about right (maybe - I have not measured them), but secdelay(1) takes far longer than one second to finish. Please HELP.

  • A big mistake you make is to ignore the time needed by the loop itself. The total loop time is not 0.5 + 0.5 µs.

    You are not even in control of the code generated.

    Consider making the smallest loop step 10 µs instead of 1 µs. And consider writing it in an assembler file. A rough C sketch of the 10 µs-step idea is below.
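
    A minimal sketch of that 10 µs step, written in C (the function name delay10us and the NOP count are guesses; the real per-step time has to be checked against the compiler's disassembly, or measured on a pin with a scope):

    #include <intrins.h>    // provides _nop_()

    // each step is nominally 10 us at 24 MHz (0.5 us per machine cycle)
    void delay10us(unsigned char steps){
            while (steps--){
                    // ~16 NOPs = ~8 us, leaving roughly 2 us for the loop
                    // overhead; verify the real figure in the generated code
                    _nop_(); _nop_(); _nop_(); _nop_();
                    _nop_(); _nop_(); _nop_(); _nop_();
                    _nop_(); _nop_(); _nop_(); _nop_();
                    _nop_(); _nop_(); _nop_(); _nop_();
            }
    }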

  • void usdelay(unsigned int us){
            while (us--){
                    _nop_();        //0.5uS single-cycle instruction delay
                    _nop_();        //0.5uS single-cycle instruction delay
            }
    }
    


    what about the time the while() itself takes?
    look at the disassembly
    illusions about the time a C construct takes are always false

    if you want precise timing
    1) write it in assembler
    2) remember the time the call and parameter transfer takes (a rough sketch of compensating for that is below)
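
    For point 2, a rough C sketch of subtracting an assumed fixed call/setup cost before looping (the 22 us figure comes from the original post and must be measured for your own build; CALL_OVERHEAD_US and usdelay_compensated are made-up names):

    #include <intrins.h>

    #define CALL_OVERHEAD_US  22u    // assumed fixed cost of call + parameter transfer

    void usdelay_compensated(unsigned int us){
            if (us <= CALL_OVERHEAD_US)
                    return;              // the request is shorter than the overhead itself
            us -= CALL_OVERHEAD_US;      // pay the fixed cost only once
            while (us--)
                    _nop_();             // tune this body so one pass takes ~1 us
    }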

    Erik

  • Others have mentioned the issues / limitations associated with your approach. Achieving very precise software delays in all cases is very difficult, if not impossible. However, achieving reasonably accurate ms-level delays is doable.

    My approach is similar to yours, except that I built mine on three levels:

    1) delay(): this delays by a certain number of cycles (a possible sketch of it is shown after the code below).
    2) delay_us(): this produces µs-level delays using delay().
    3) delay_ms(): this builds on delay_us() to produce ms-level delays.

    void delay(unsigned char cycles);  //level-1 primitive, defined elsewhere: burns a given number of loop passes
    #define DLY_US  100  //delay_us(DLY_US) amounts to 1 ms on a 1 MHz (1,000,000 Hz) cpu clock. Adjust this for your chip
    #define F_CPU  2000000ul //cpu clock, in ticks per second (Hz)
    void delay_us(unsigned short us) {
      delay(us); //delay() takes an 8-bit input
      us = us >> 8;
      while (us--) delay(0xff); //this minimizes the overhead for short delay loops
    }
    
    void delay_ms(unsigned short ms) {
      while (ms--) delay_us(DLY_US * F_CPU / 1000000ul); //adjust DLY_US to achieve the right amount of delays in ms
    }
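
    The level-1 delay() primitive is not shown above; one possible sketch (the body is an assumption, and the cost of one pass has to be measured so DLY_US can be calibrated against it) is simply:

    #include <intrins.h>

    void delay(unsigned char cycles){
      while (cycles--)
        _nop_();   //each pass burns a handful of machine cycles
    }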
    

    You can take this approach to achieve precise µs-level delays, but they are not as reliable as ms-level delays.

    If you wish to achieve anything longer than ms-level delays, you should think about other approaches, like a real hardware timer (a sketch follows).
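
    For the hardware-timer route, a minimal sketch for the original poster's part (AT89S52 at 24 MHz, standard 12-clock core, so one machine cycle = 0.5 us and 1 ms = 2000 cycles; msdelay_timer is a made-up name and the reload value should be re-checked for your clock):

    #include <reg52.h>

    void msdelay_timer(unsigned int ms){
      TMOD &= 0xF0;        //clear Timer 0 control bits
      TMOD |= 0x01;        //Timer 0, mode 1 (16-bit)
      while (ms--){
        TH0 = 0xF8;        //reload: 65536 - 2000 = 63536 = 0xF830
        TL0 = 0x30;
        TF0 = 0;           //clear the overflow flag
        TR0 = 1;           //start Timer 0
        while (!TF0);      //wait ~1 ms for the overflow
        TR0 = 0;           //stop Timer 0
      }
    }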