Dear sirs and madams,
I present the following program, which I am simulating on an 8051 MCU:
#include <reg51.h>
#define output P0

void main(void)
{
    volatile unsigned int x;  /* volatile keeps the compiler from optimizing the empty delay loops away */

    while (1) {
        output = 1;                     /* drive the port high */
        for (x = 0; x < 65000; x++);    /* crude busy-wait delay */
        output = 0;                     /* drive the port low */
        for (x = 0; x < 65000; x++);    /* crude busy-wait delay */
    }
}
My problem is that I am attempting to learn timing delays. In this program, P0 should go from 0 to 1 with a delay in between. When I debug the program and run it with the P0 peripheral window open, it does switch the value from 0 to 1, BUT it is going too fast no matter how much delay I put in.
Please help me.
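For comparison, a hardware timer gives a delay you can count exactly, instead of a compiler-dependent busy loop. A minimal sketch, assuming a 12 MHz crystal (one machine cycle = 1 us) and Keil's reg51.h register names (TMOD, TH0, TL0, TR0, TF0); the function name is illustrative:

#include <reg51.h>
#define output P0

void delay50ms(void)
{
    TMOD = (TMOD & 0xF0) | 0x01;  /* Timer 0, mode 1 (16-bit) */
    TH0 = 0x3C;                   /* 65536 - 50000 = 0x3CB0,  */
    TL0 = 0xB0;                   /* so overflow after 50000 cycles = 50 ms */
    TF0 = 0;                      /* clear the overflow flag */
    TR0 = 1;                      /* start the timer */
    while (!TF0);                 /* wait ~50 ms for the overflow */
    TR0 = 0;                      /* stop the timer */
}

void main(void)
{
    while (1) {
        output = 1;
        delay50ms();
        output = 0;
        delay50ms();
    }
}

Note also that the simulator does not run in real time: judge the delay by the simulated time/cycle counter, not by how quickly the peripheral window updates.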
'diff' indicates that it is an identical question.
You must speak to Quee Fing. See if one of you can find out what an assembler is.
You won't need to travel far.
People in favour of generating nanosecond delays... answer this:
How can a CPU running at 12 MHz / 12 = 1 MHz generate a delay in nanoseconds? Each CPU machine cycle is 1 microsecond, _unless you are hell-bent on writing everything in nanoseconds_.
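To spell the arithmetic out (assuming the classic 8051, which divides the crystal frequency by 12 per machine cycle):

    12 MHz / 12 = 1 MHz machine-cycle rate
    1 / 1 MHz   = 1 us per machine cycle = 1000 ns

Even a one-cycle instruction therefore takes 1000 ns, so the shortest delay software can produce on this part is 1 us.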
Is there any relation between the assembler and the 'time delay'? (As far as I know, time delays are generated on the actual hardware... and an assembler is software that merely generates *.obj files, which in turn are linked to generate a *.hex file.)
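There is one real connection: the assembly level is where delay loops become countable, because every 8051 instruction has a documented machine-cycle count. A hand-timed sketch (assuming a 12 MHz crystal, so 1 machine cycle = 1 us; the labels are illustrative):

DELAY:  MOV  R7, #250     ; 1 cycle - load the loop counter
LOOP:   NOP               ; 1 cycle
        NOP               ; 1 cycle
        DJNZ R7, LOOP     ; 2 cycles - decrement R7, repeat until zero
        RET               ; 2 cycles
        ; each pass through LOOP is 4 cycles = 4 us, so 250 passes ~ 1 ms

So the assembler itself adds no delay; it just lets you count exactly how long the generated instructions will take on the hardware.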
PS: I ain't any genius, so correct me instead of criticizing. :)