ADuC timing loops, please help

Hi there,
I have a development board built around the ADuC832BS processor and use Keil uVision2 v2.40 for software development.
I tried to implement the 1-Wire routines for the DS18B20 (which I connected at P3.4), but it seems that I have a timing problem. It all starts with the delay function. I use PLLCON = 0x051 (fcore = 8.388608MHz).
Below is an example of a timing delay for the DS5000 (8051-compatible) microcontroller with an 11.059MHz clock.
// Calling the routine takes about 24us, and
// then each count takes another 16us.

void delay(int useconds)
{
    int s;
    for (s = 0; s < useconds; s++)
        ;
}
I also know that the ADuC832 machine cycle time is nominally 12/Core_CLK.
Could you please tell me how to calculate the timings for delay() on the ADuC832 (i.e. the machine cycles required): the calling overhead of the delay() routine, and the time taken by each count inside it?

For the DS5000 the reset routine is:
(Reset is 480us, so the delay value is (480 - 24)/16 = 28.5; we use 29. Presence is checked another 70us later, so the delay is (70 - 24)/16 = 2.875; we use 3.)
unsigned char ow_reset(void)
{
    unsigned char presence;
    DQ = 0;             // pull DQ line low
    delay(29);          // leave it low for 480us
    DQ = 1;             // allow line to return high
    delay(3);           // wait for presence
    presence = DQ;      // get presence signal
    delay(25);          // wait for end of timeslot
    return (presence);  // presence signal returned
}                       // 0 = presence, 1 = no part

Thank you in advance for helping me out.

  • void delay(int useconds)

    You can't unless you use a faster uC.

    It is impossible to make a delay routine with that resolution; the best resolution you can get is the time for a DJNZ (2us for a "standard" '51) if you use char useconds.

    With int useconds, I guess the best resolution will be on the 10-instruction (NOT clock) cycle level.

    C is unusable for delay routines; you must use assembler, or risk the timing changing at any time.

    I do not use the AD uCs, but in the rear of my DNA computer is stored that it is a 6-clocker.

    Erik

  • The first and foremost problem is that you're trying to write that delay() routine in C. Don't --- it'll fail to work. Routines that need precise timing at this level are one of the few things that you absolutely have to do in assembler, even on '51.

    It's likely a mistake to even be trying to do this as a subroutine --- call overhead is likely to disrupt the timing noticeably.

    So: look up the data sheet for actual instruction timing. Program your delay in assembly language, then calculate how long it should take, using nothing but the data sheet and your hardware configuration data (XTAL, clock dividers, ...).

    For a cross-check, run it in the simulator and subtract the times spent between two breakpoints surrounding the delay.
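
    As a minimal sketch of what that might look like, assuming Keil C51's #pragma asm facility (which needs the SRC directive / "Generate Assembler SRC File" option enabled, and assumes R7 is free to clobber) and the 12/Core_CLK machine cycle with fcore = 8.388608MHz quoted above, i.e. about 1.43us per machine cycle - the counts here are illustrative, so verify them on your own hardware or in the simulator:

    /* Fixed ~480us delay coded in assembler, so the instruction
       sequence (and hence the timing) is pinned down.
       DJNZ Rn,rel = 2 machine cycles = ~2.86us per iteration here,
       so 168 iterations = ~480.7us; MOV + LCALL + RET add roughly
       another 7us of overhead on top. */
    void delay_480us(void)
    {
    #pragma asm
            MOV   R7,#168        ; 1 machine cycle
    dly1:   DJNZ  R7,dly1        ; 2 machine cycles x 168
    #pragma endasm
    }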

  • "For a cross-check, run it in the simulator and subtract the times spent between two breakpoints surrounding the delay."
    If you do not have a scope, that is probably the best way you have. However, if you have a scope, set a port bit before the (call to the) delay and clear it after. That way you will catch even wrong premises (clock speed, clock-to-instruction-cycle conversion, etc.).
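
    A minimal sketch of that scope technique (the header name and the choice of P3.5 are assumptions - use whatever spare pin your board actually has free):

    #include <aduc832.h>            /* or <reg52.h>, per your toolchain   */

    sbit SCOPE_PIN = P3^5;          /* any spare pin; P3.5 is an example  */

    void delay(int useconds);       /* the routine under test             */

    void main(void)
    {
        while (1) {
            SCOPE_PIN = 1;          /* rising edge: delay starts          */
            delay(29);              /* pulse width on the scope = actual  */
            SCOPE_PIN = 0;          /* delay, including call overhead     */
        }
    }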

    Erik

  • "The first and foremost problem is that you're trying to write that delay() routine in C. Don't --- it'll fail to work."

    Of course it will!
    It is essential for a delay routine that you know precisely what instructions will be executed so that you can calculate the time it'll take. But the whole point of a High-Level Language like 'C' is that you do not know precisely what machine instructions the compiler will emit!

    Thus, as Erik & Hans-Bernhard have pointed out, trying to code a precise delay routine in 'C' is futile.

    "It's likely a mistake to even be trying to do this as a subroutine --- call overhead is likely to disrupt the timing noticeably."

    Not necessarily - you can include the overhead in the delay calculation; see my post of 11/17/03 15:26:56 in this thread:
    http://www.keil.com/forum/docs/thread2938.asp

    Note, however, that I did the above on a 4 cycles/instruction chip at 24MHz - so the overhead will be more significant on an 11MHz 12-clocker...
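
    For illustration, the overhead can be folded in like this - but note these are the DS5000 figures from the app note, NOT measured ADuC832 values, so treat them as placeholders until you have counted or scoped your own:

    #define T_CALL_US  24    /* fixed call/return overhead (measured)  */
    #define T_LOOP_US  16    /* time per loop iteration (measured)     */

    /* round to the nearest count rather than truncating */
    #define DELAY_COUNT(us)  (((us) - T_CALL_US + T_LOOP_US/2) / T_LOOP_US)

    /* usage, e.g. inside ow_reset(): */
    delay(DELAY_COUNT(480));    /* = 29 with the figures above */
    delay(DELAY_COUNT(70));     /* = 3                         */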

  • I've seen your point regarding the Keil C delay routines.
    What I can't understand is why Maxim/Dallas has presented the routines for the DS1820 written in C.
    Here it is:
    http://pdfserv.maxim-ic.com/en/an/app162.pdf
    After all, the DS5000 is pretty much the same as the ADuC832 (same 8051 architecture, 16MHz clock).
    Here I found the ADuC832 instruction set plus oscillator periods:
    http://www.analog.com/UploadedFiles/REDESIGN_Quick_Reference_Guides/1015540832qref0.pdf
    I wonder, is it really impossible to estimate the calling time for this delay() and for each count? How did the Maxim team manage to estimate it for the DS5000?

    void delay(int useconds)  // 24us for calling
    {
        int s;
        for (s = 0; s < useconds; s++)
            ;                 // 16us for each count
    }

    Regards,
    Robert

  • "What i can't understand is why Maxim dallas has presented the routines for ds1820 written in C. here it is: http://pdfserv.maxim-ic.com/en/an/app162.pdf "

    You're right - it is very naughty of them to present that code like that!

    They haven't specified which Memory Model to use - and that will significantly affect the timings!!

    Also, they haven't specified a compiler nor version; as mentioned above, there is absolutely no guarantee whatsoever that different compilers - or even different versions of the same compiler - won't generate different code for this function.

    Thus, they are wrong to state specific timings without giving these necessary details!

    Aside from the above, the function is also misleading in naming its input parameter "useconds", as this kind of implies that a value of, say, 23 will give a delay of 23us - when clearly it doesn't (as their own examples show!!)

  • "I wonder,is really impossible to estimate the calling time for this delay() and for each count?"

    Nobody said it was impossible.

    But the only way to do it is to know the precise machine instructions - and you cannot guarantee precisely what machine instructions any compiler will generate.

    Sure, you can compile it and then examine the generated code and compute the timings for that particular compilation; but you have no guarantee whatever that you will get exactly the same code generated next time - especially if you use a different compiler and/or version, and/or fiddle with the compiler options.

  • Wow! A datasheet or app note with an error in it. I'm stunned. (After picking myself up off the ground and recovering my composure...) Virtually every IC datasheet out there has some error or omission of pertinent information. The apps people probably wrote some code, then used their scope to measure the delay, and declared victory. The first question that comes to my mind is: how much accuracy is really required by the peripheral? If those numbers are minimums, can you get away with a delay which is loosely controlled and therefore excessive, but nevertheless adequate?

  • "The first question that comes to my mind is how much accuracy is really required by the peripheral?"

    The timing limits are described in the 1-Wire specs, and in the datasheet for each 1-Wire device. For the DS18B20 reset, for example, the master must hold the line low for at least 480us, and the device replies with a presence pulse 15us to 60us after release, lasting 60us to 240us.

    I haven't tried it, but I strongly suspect that switching from the Small to the Large memory model (or vice versa) would be enough to break the timings if the 'C' code from the app note were relied upon...
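
    One concrete way the model could bite (sketched on the assumption that the counter actually lives in memory - an optimizing compiler may well keep it in registers anyway): in SMALL the loop variable sits in on-chip data RAM with fast direct accesses, while in LARGE it defaults to xdata and every access becomes a multi-cycle MOVX sequence. Forcing the memory space narrows, but does not remove, the uncertainty:

    void delay(int useconds)
    {
        int data s;                     /* pin the counter to on-chip RAM     */
        for (s = 0; s < useconds; s++)
            ;                           /* timing is still compiler-dependent */
    }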