Dear sirs and madams,
I present the following program, which I am simulating on an 8051 MCU:
#include <reg51.h>
#include <stdio.h>
#define output P0

void main() {
    unsigned int x;
    while(1) {
        output = 1;
        for(x = 0; x < 65000; x++) { }   /* crude software delay */
        output = 0;
        for(x = 0; x < 65000; x++) { }
    }
}
My problem is that I am attempting to learn timing delays. In this program, P0 should go from 0 to 1 with a delay in between. When I debug the program and run it with the P0 peripheral window open, it does switch the value from 0 to 1, BUT it toggles too fast no matter how much delay I put in.
Please help me.
Have you tried a loop inside a loop?
int lp1, lp2, lp3;   // and so on... depending on how big a delay you want

for(lp1 = 0; lp1 < 0xFFFF; lp1++)
    for(lp2 = 0; lp2 < 0xFFFF; lp2++)
        for(lp3 = 0; lp3 < 0xFFFF; lp3++)
            ;   // continue looping
I would always prefer using an ON-CHIP TIMER to generate delays.
By the way, the simulator may not give the _exact_ time delay as it may be generated on the actual hardware.
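As a sketch of the on-chip timer approach (target-specific C51 code, not runnable off-target): the example below assumes a classic 12-clock 8051 with a 12 MHz crystal, so one machine cycle is 1 µs. Timer 0 in mode 1 (16-bit) is preloaded to count 50000 cycles, giving a 50 ms delay per call. The reload value 0x3CB0 is 65536 − 50000; verify the actual timing on your hardware.

```c
#include <reg51.h>

#define OUTPUT P0

/* Delay of ~50 ms, assuming 1 us per machine cycle (12 MHz crystal). */
void delay50ms(void)
{
    TMOD = (TMOD & 0xF0) | 0x01;  /* Timer 0, mode 1 (16-bit)          */
    TH0  = 0x3C;                  /* 65536 - 50000 = 15536 = 0x3CB0    */
    TL0  = 0xB0;
    TF0  = 0;                     /* clear overflow flag               */
    TR0  = 1;                     /* start Timer 0                     */
    while (!TF0)                  /* wait for overflow                 */
        ;
    TR0  = 0;                     /* stop Timer 0                      */
    TF0  = 0;
}

void main(void)
{
    while (1) {
        OUTPUT = 1;
        delay50ms();
        OUTPUT = 0;
        delay50ms();
    }
}
```

Unlike a software loop, this delay is independent of compiler optimization, since the timer counts machine cycles in hardware.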
Fool. Try having a nanosecond delay with an on chip timer.
Assembler can be very useful. No risk of a compiler optimizing code away. Seems like it's a forgotten skill.
Playing with words? I'm not the one who felt a need to write "fool" indicating that your main interest wasn't to push the debate forward.
And it is a play with words if you think my text tried to explain the use of nop for wait state generation as "not delay".
Play devil's advocate if you like - I have brought up the use of the nop intrinsic in a number of threads without anyone first needing to jump in with any "fool" statement.
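For reference, the nop intrinsic mentioned above is available in Keil C51 via `<intrins.h>`. A minimal sketch (target-specific, not runnable on a host), assuming a classic 12-clock 8051 at 12 MHz where one NOP is one machine cycle of 1 µs:

```c
#include <intrins.h>   /* Keil C51 header providing the _nop_() intrinsic */

/* Each _nop_() emits exactly one NOP instruction, useful for very
 * short, fixed delays that a C loop cannot express reliably. */
void short_delay(void)
{
    _nop_();
    _nop_();
    _nop_();   /* three machine cycles, plus call/return overhead */
}
```

Because the intrinsic maps directly to an instruction, the compiler cannot merge or remove it the way it can an empty loop.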
A problem with software delays in a high-level language is that even when they are not optimized away, you can still suffer large variation in minimum time as a cascaded result of how the compiler/linker operates. With C51, much of the optimization happens in the linker, which is quite unusual, and the linker may split or merge code when it sees identical sequences of code in other functions, even from other source modules. It may also switch between memory regions for variables.
With most other compilers, you normally only need to perform a manual validation if you modify the delay code, change compilation options, change memory speed settings, change core speed, or change compiler version.
With C51, you also have to be prepared to verify that delay after every code change anywhere in the program, because of how the linker operates, and because of changes to register/variable mappings and reuse resulting from the call tree analysis it performs. Most processors use a real stack, while C51 juggles multiple memory regions with different access methods.
But all this is still irrelevant, since you activated Don Quixote mode by reading "always prefer" as meaning "only acceptable". And the thread can't be constructive until you decide to leave Don Quixote mode.
???
" Fool. Try having a nanosecond delay with an on chip timer. Assembler can be very useful."
A nanosecond delay cannot be achieved using 8051 assembler.
In fact, the assembler is not the limiting factor. The speed the CPU runs at and its maximum speed are far more relevant.
"The speed the CPU runs at and it's maximum speed are far more relevant."
How fast will your 8051's CPU run to achieve a nanosecond delay?
Faster than your brain.
"Faster than your brain."
Uh-huh. We knew you'd fail. Thanks for playing.
The least I can do is offer you a rematch. But you bore me, so I won't.
What an odd discussion.
Looks like the chaff is being thrown out with the wheat.
/We knew you'd fail./
Heh. With "nanosecond delay", his failure was preordained.
Cool. Can anyone join in with the derision?
"nanosecond delay". What an idiot!
"assembly". How 20th century!
"appropriate way to achieve a goal". Be serious!
"Can anyone join in with the derision?"
Sure! The more, the merrier. The jackassery is strong in that one.
Well, thanks for the "Fool" compliment. By this time everyone knows 'who is who'.
I think that is an incorrect statement.
"I think that is an incorrect statement."
You are incorrect thinking that.
Then we need to ask again:
That is a different question.
"That is a different question."
'diff' indicates that it is an identical question.
You must speak to Quee Fing. See if one of you can find out what an assembler is.
You won't need to travel far.
People in favour of generating a nanosecond delay... answer this:
How can a CPU running at 12 MHz / 12 = 1 MHz generate a delay in nanoseconds? Each CPU machine cycle is 1 microsecond, _unless you are hell-bent on writing everything in nanoseconds_.
Is there any relation between the assembler and the 'time delay'? (As far as I know, time delays are generated on actual hardware... and the assembler is software that merely generates *.obj files, which in turn are linked to generate a *.hex file.)
PS: I ain't any genius, hence correct me instead of criticizing. :)