Dear sirs and madams,
I present the following program, which I am simulating on an 8051 MCU:
#include <reg51.h>
#include <stdio.h>
#define output P0

void main(void) {
    unsigned int x;
    while (1) {
        output = 1;
        for (x = 0; x < 65000; x++);
        output = 0;
        for (x = 0; x < 65000; x++);
    }
}
My problem is that I am attempting to learn timing delays. In this program, P0 should toggle between 0 and 1 with a delay in between. When I debug the program and run it with the P0 peripheral window open, the value does switch between 0 and 1, BUT it is toggling far too fast, no matter how large I make the delay loops.
Please help me
Preferring something isn't the same as being stubborn and only accepting a single solution.
Embedded hardware often requires wait states to pace the hardware, but those aren't really delay loops. You normally don't need an exact delay, only a minimum delay, so you can usually ignore interrupts.
And by the way, most compilers manage ns-scale delays without assembler. There is normally a nop intrinsic that the compiler knows it must not remove, even when running at high optimization levels.
I would always prefer using an ON-CHIP TIMER to generate delays.
Fool. Try having a nanosecond delay with an on-chip timer.
Assembler can be very useful: there is no risk of a compiler optimizing the code away. It seems to be a forgotten skill.
Have you tried a loop inside a loop?
unsigned int lp1, lp2, lp3;  /* and so on, depending on how big a delay you want */
for (lp1 = 0; lp1 < 0xFFFF; lp1++)
    for (lp2 = 0; lp2 < 0xFFFF; lp2++)
        for (lp3 = 0; lp3 < 0xFFFF; lp3++)
            ;  /* continue nesting for longer delays */
By the way, the simulator may not give the _exact_ time delay that would be produced on the actual hardware.
Have you considered googling the issues with busy-loops that don't do anything, and the large number of recommendations to use timers to implement delays?
An observant developer might see it as a delay integrated in the processor silicon, not affected by the software limitations.
My problem is that I ignore instructions for posting code