
How could I make an adjustable delay function at the microsecond level?

Hello everyone:

I use an AT89C52 with an 11.059 MHz oscillator.
A single-cycle instruction takes about 1 microsecond (10^-6 s): with the standard 12 oscillator clocks per machine cycle, one machine cycle is 12 / 11.059 MHz ≈ 1.085 us.

I wrote a delay function in *.asm.

It works well, but I want it to be adjustable.
I want the delay to follow a parameter I pass in. My goal is to vary the delay time for some protocol time-slot testing.

I found this somewhat difficult to achieve.
Even a basic instruction such as _nop_ takes about 1 microsecond, so I can't add any decision logic without the decisions themselves taking extra time.
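
For illustration, here is a minimal sketch of the kind of loop-based delay I mean (the label and the choice of R7 are just placeholders): the bare DJNZ already costs 2 machine cycles per pass, so the step size is about 2 us before any call/return overhead is added.

    ; Hypothetical sketch: adjustable delay with roughly 2 us resolution.
    ; The caller loads R7 with the desired count before LCALL DELAY_R7.
    DELAY_R7:
        DJNZ  R7, DELAY_R7   ; 2 machine cycles (~2.17 us) per pass
        RET                  ; 2 machine cycles back to the caller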

How could I implement such a delay?
Please give me a hand, thanks a lot!

  • You might try a series of _nops_, and jump far enough down into them so that the remainder represents the time that you need.

    There is still some overhead in this method, of course. You have to calculate the jump, and get into and out of the function. You won't be able to create a 1 us delay, but you might be able to create fairly precise delays from, say, 10 us up to as long a table as you can stand.
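
    A rough sketch of that idea, assuming a standard 12-clock part, the table base loaded into DPTR, and the byte offset already computed in A (all labels here are hypothetical):

        ; Jump A bytes into a slide of NOPs; every NOP skipped removes
        ; one machine cycle (~1.085 us) from the delay.
        DELAY_TBL:
            MOV   DPTR, #NOP_SLIDE   ; 2 machine cycles
            JMP   @A+DPTR            ; 2 machine cycles, lands A bytes in
        NOP_SLIDE:
            NOP                      ; 1 byte and 1 machine cycle each
            NOP
            NOP
            ; ... extend the slide as far as you can stand ...
            RET                      ; 2 machine cycles back to the caller

    The caller works out A as the slide length minus the number of NOPs it wants to execute, so the call, jump, and return overhead sets the floor on the shortest delay.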

    Longer intervals mean you can use a loop, at the cost of coarser granularity and more overhead.
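
    For longer delays, a nested DJNZ loop is one way to trade resolution for range; a sketch with hypothetical register assignments (outer count passed in R6, inner reload fixed at 250):

        ; Each outer pass costs 1 + 2*250 + 2 = 503 machine cycles (~546 us),
        ; so the total delay is roughly R6 * 546 us plus call overhead.
        DELAY_LONG:
            MOV   R7, #250          ; 1 machine cycle
        INNER:
            DJNZ  R7, INNER         ; 2 machine cycles per pass
            DJNZ  R6, DELAY_LONG    ; 2 machine cycles, repeat outer pass
            RET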

    Small intervals and precise measurements are best done by timer hardware rather than software. "Small" is relative to your processor clock and architecture. If software has to act at very precise times (send this response exactly 3.4 microseconds after receiving this message), then you need a faster processor, as Andrew suggests. There is an overall system design issue to be considered here, so it's hard to suggest solutions without really knowing the requirements.
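
    On the 8052 itself, a one-shot busy-wait on Timer 0 in 16-bit mode is one way to do that; a rough sketch, assuming the caller preloads R6:R7 with 65536 minus the desired delay in machine cycles:

        ; Wait on a Timer 0 overflow instead of counting instructions.
        DELAY_T0:
            ANL   TMOD, #0F0H   ; clear the Timer 0 bits, keep Timer 1 setup
            ORL   TMOD, #01H    ; Timer 0, mode 1 (16-bit)
            MOV   TH0, R6       ; reload = 65536 - delay_in_machine_cycles
            MOV   TL0, R7
            CLR   TF0           ; clear any stale overflow flag
            SETB  TR0           ; start counting machine cycles
        WAIT0:
            JNB   TF0, WAIT0    ; spin until the timer overflows
            CLR   TR0           ; stop the timer
            RET

    The setup and polling still cost a handful of machine cycles, so this suits delays from a few tens of microseconds upward rather than an exact 1 us.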
