
C program Time delay help!

Dear sirs and madams,

I present the following program, which I am simulating on an 8051 MCU:

#include <reg51.h>

#define output P0

void main(void)
{
    /* volatile, so the compiler cannot delete the empty delay loops */
    volatile unsigned int x;

    while (1) {
        output = 1;                      /* drive P0 high (0x01) */
        for (x = 0; x < 65000; x++) ;    /* crude software delay */

        output = 0;                      /* drive P0 low */
        for (x = 0; x < 65000; x++) ;    /* crude software delay */
    }
}

My problem is that I am attempting to learn timing delays. In this program, P0 should toggle between 0 and 1 with a delay in between. When I debug the program and run it with the P0 peripheral window open, the value does switch between 0 and 1, BUT it is going too fast no matter how much delay I put in.

Please help me
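
If the compiler optimizes the empty loops away, no time is spent in them at all; making the counter volatile (as in the listing above) is one fix. Another is to use one of the 8051's hardware timers, which does not depend on the optimizer. Below is a minimal sketch, assuming a classic 12-clock core running at 12 MHz (one machine cycle per microsecond); the function name and the 50 ms reload value are illustrative and must be recomputed for other clock rates.

#include <reg51.h>

/* Illustrative helper: busy-wait roughly 50 ms using Timer 0 in
   16-bit mode 1.  Assumes one machine cycle per microsecond
   (12 MHz crystal, 12-clock core). */
void delay_50ms(void)
{
    TMOD = (TMOD & 0xF0) | 0x01;   /* Timer 0, mode 1 (16-bit) */
    TH0  = 0x3C;                   /* 65536 - 50000 = 0x3CB0 */
    TL0  = 0xB0;
    TF0  = 0;                      /* clear the overflow flag */
    TR0  = 1;                      /* start Timer 0 */
    while (!TF0) ;                 /* spin until the timer overflows */
    TR0  = 0;                      /* stop Timer 0 */
}

Calling delay_50ms() between the two writes to output would give a toggle rate that is visible in the simulator regardless of optimization level.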

Parents
  • But that isn't really a delay loop. You normally don't need an exact delay, just a minimum delay

    You're playing with words. A minimum delay is still a delay.

    There is normally a nop intrinsic that the compiler knows it must not remove, even when running at high optimization levels.
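
    In Keil C51 that intrinsic is _nop_(), declared in <intrins.h>. A minimal sketch (the wrapper name is illustrative; the call and return of the wrapper add a few machine cycles on top of the NOPs themselves):

        #include <intrins.h>   /* Keil C51 header declaring _nop_() */

        /* Each _nop_() emits one NOP instruction that the optimizer
           must keep; on a classic 12-clock core each NOP is one
           machine cycle (1 µs at 12 MHz). */
        void short_delay(void)
        {
            _nop_();
            _nop_();
            _nop_();
            _nop_();
        }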

    <devils advocate>
    So, all the talk in all those threads saying software delays are bad because they get optimized out is bunk.
    </devils advocate>

    And, while I'm at it, simulators can be very useful for checking software delays, especially on something like an older '51 core. Not for real-time measurement of the delay, but for viewing cycle counts and determining the absolute (minimum) delay at a given clock frequency.
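
    For example, if the simulator's cycle counter shows a delay loop consuming 24,000 machine cycles, then on a classic 12-clock '51 at 12 MHz (one machine cycle per microsecond) the minimum delay is 24,000 × 1 µs = 24 ms; doubling the crystal to 24 MHz halves it to 12 ms.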

    I prefer to use software delays when they are the most appropriate way to achieve a goal.


Children
  • Playing with words? I'm not the one who felt a need to write "fool" indicating that your main interest wasn't to push the debate forward.

    And it is a play on words if you claim my text described the use of nop for wait-state generation as "not a delay".

    Play devil's advocate if you like - I have brought up the use of the nop intrinsic in a number of threads without anyone needing to jump in first with a "fool" remark.

    A problem with software delays in a high-level language is that even when they are not optimized away, you can still suffer large variation in the minimum time as a cascaded result of how the compiler/linker operates. With C51, much of the optimization happens in the linker, which is quite unusual, and the linker may split or merge code when it sees identical code sequences in other functions, even in other source modules. It may also move variables between memory regions.
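
    As an illustration (a generic sketch, with a hypothetical function name): declaring the counter volatile keeps the loop from being deleted, yet how many cycles each iteration costs still depends on the compiler, its options, and - with C51 - which memory area the linker assigns to the parameter:

        /* The loop cannot be optimized away, but its per-iteration
           cost is compiler- and option-dependent, which is exactly
           the variation described above. */
        void crude_delay(volatile unsigned int n)
        {
            while (n--) ;
        }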

    With most other compilers, you normally only need to perform a manual validation if you modify the delay code, change compilation options, change memory-speed settings, change the core speed, or change the compiler version.

    With C51, you also have to be prepared to verify the delay after every code change anywhere in the program, because of how the linker operates, and because register/variable mappings and reuse change as a result of the call-tree analysis it performs. Most processors use a real stack, while C51 juggles multiple memory regions with different access methods.

    But all this is still irrelevant, since you activated Don Quixote mode by reading "always prefer" as meaning "only acceptable". And the thread can't be constructive until you decide to leave Don Quixote mode.
