I've had a working project/system for over 10 years now, and we developed an offshoot product using the same LPC936 chip. The delays on the working system use Timer 0, with the interrupt set to fire every 1 ms (millisecond). I call a routine like 'delay(500)' to delay 500 ms. It works perfectly.
I copied (or so I thought) the delay routines over to my new product, and found that the delays were off substantially. I discovered that in my working system, I had the timer variables defined as 'xdata'. In my new system, I had defined them as 'idata'. No other differences. I changed the new system to use 'xdata', and guess what, the delays started working perfectly. On the money, every time, every delay. I changed it back to 'idata' and a couple of delays are correct, but some are way under the proper delay and some are longer. What can cause that weird behavior?
I have a test case that does nothing but turn on an LED and delay, to cut out all other activity. This is the only timer running and no other interrupts are set up. I assume there is a register or stack corruption problem or some race condition when using idata over xdata, but I don't know at this stage.
Any ideas?
Sutton
Look at your assembly code. I think you will often see a three-instruction sequence using MOVX where the idata move can be a single one-fetch MOV.
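For example, a typical compiler might read the two variables below roughly like this (a sketch only; the exact instruction sequences depend on the compiler, memory model, and address):

    unsigned int xdata xcount;  /* read: MOV DPTR,#xcount / MOVX A,@DPTR, plus INC DPTR
                                   for the second byte - several instructions per access */
    unsigned int idata icount;  /* read: a plain MOV per byte (direct, or via @R0)
                                   - one fetch each                                      */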
Just a guess on my part.
Bradford
Hmm, it could be that the ISR doesn't complete until the next interrupt request and you're losing interrupts. You might also have interrupt-masking issues.
My guess is that you simply reset the timer in the ISR instead of adjusting it by the reload value, to compensate for the time that has already passed since the interrupt was requested.
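If so, the fix in the ISR looks roughly like this (a sketch, untested; RELOAD_H/RELOAD_L stand in for whatever reload value is in use, and it assumes the interrupt latency never exceeds 256 timer clocks, so TH1 is still 0 on entry):

    #define RELOAD_L 0x8E                /* placeholder reload value */
    #define RELOAD_H 0xFE

    void timer1 (void) interrupt 3
    {
        TR1 = 0;                         /* stop the timer while adjusting     */
        TL1 += RELOAD_L;                 /* add the reload onto the ticks that */
                                         /* already elapsed since the overflow */
        TH1 = RELOAD_H + (TL1 < RELOAD_L); /* propagate the carry              */
        TR1 = 1;
        /* ... increment your tick counter here ... */
    }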
See the code below. With timer_count4 declared as xdata it works fine; as idata it does not.
0xFE8E gives a 100 us period using the internal RC oscillator (3.6875 MHz for the peripherals).
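(That is, 0x10000 - 0xFE8E = 0x172 = 370 timer clocks, and 370 / 3.6875 MHz is about 100.3 us per overflow.)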
unsigned int xdata timer_count4;

void timer1 (void) interrupt 3
{
    timer_count4++;          // used in delay2() for time expiry
    TH1 = 0xFE;
    TL1 = 0x8E;
    TF1 = 0;                 // reset overflow flag, which causes the interrupt
}

void delay2 (unsigned int x)
{
    unsigned int xdata numloops;

    TH1 = 0xFE;
    TL1 = 0x8E;
    numloops = x;            // 1 interrupt = 100 us
    timer_count4 = 0;
    TR1 = 1;
    for (;;) {
        if (timer_count4 >= numloops) {
            TR1 = 0;
            return;
        }
    }
}

void test (void)
{
    for (;;) {
        LED = 1;
        delay2(1);           // 100 us
        LED = 0;
        delay2(1);           // 100 us
    }
}
unsigned int xdata timer_count4;
That's the root of your problem right there. That line is missing a "volatile" qualifier. As-is, this only ever worked by way of luck rather than design. Changing the memory class allowed the optimizer to work better, which exposed the long-standing bug.
For extra protection you would have to disable interrupts around every access to timer_count4 outside the ISR, because an unsigned int is too big for a '51 to access atomically. Or you might be better off using a longer timer interval so your counter doesn't have to count quite that fast, and you can make do with an 8-bit range.
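In code, the two options look something like this (a sketch; timer_count8 and the 1 ms figure are just examples):

    /* Option 1: keep the 16-bit counter, but make it volatile so the
       compiler re-reads it from memory on every pass through the
       wait loop instead of caching it in a register.              */
    volatile unsigned int xdata timer_count4;

    /* Option 2: slow the tick (say, to 1 ms) and use an 8-bit counter,
       which the '51 reads in a single instruction - no tearing.   */
    volatile unsigned char data timer_count8;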
Only partially understood there.
The volatile on its own will not be sufficient to guarantee safe cross-access to the variable.
Extra protection IS needed for anything other than an 8-bit variable in this scenario - period.
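Concretely, every read outside the ISR needs something like this (a sketch; EA is the 8051 global interrupt enable, and this assumes interrupts were enabled on entry):

    /* Snapshot the 16-bit counter with interrupts briefly masked, so
       the ISR cannot fire between the two byte reads the compiler
       generates for an unsigned int.                               */
    unsigned int read_ticks (void)
    {
        unsigned int snapshot;
        EA = 0;                 /* mask interrupts           */
        snapshot = timer_count4;
        EA = 1;                 /* assumes EA was 1 on entry */
        return snapshot;
    }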
I agree with the above posters, but I would normally take a different approach. Instead of having timer_count4++ incrementing, I would use timer_count4-- and reverse my logic. This will get the compiler to use the DJNZ assembly instruction. More efficient and less likely to be changed by different compiler versions.
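Roughly like this (a sketch; note it also makes the counter an 8-bit data variable, which is the only case where the compiler has much chance of emitting DJNZ-style code, and which makes the accesses atomic as a side effect):

    /* Count down toward zero in the ISR; the delay routine waits for
       zero. A one-byte data variable is read and decremented in
       single instructions.                                          */
    static volatile unsigned char data tick_count;

    void timer1 (void) interrupt 3
    {
        TH1 = 0xFE;                 /* same 100 us reload as before */
        TL1 = 0x8E;
        if (tick_count)
            tick_count--;
    }

    void delay_down (unsigned char ticks)
    {
        tick_count = ticks;
        TR1 = 1;
        while (tick_count)          /* one-byte read, cannot tear   */
            ;
        TR1 = 0;
    }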
Of course, as will be pointed out to you many times in the next few days, with a resolution of 100 us you should use the timer hardware and assembler code for your delay functions.
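For delays this short you don't even need the interrupt: reload the timer and poll the overflow flag (sketched in C here, though the same idea is trivial in assembler):

    /* Busy-wait delay: spin on the timer 1 overflow flag instead of
       taking an interrupt every 100 us.                             */
    void delay_poll (unsigned int ticks)    /* one tick = 100 us */
    {
        while (ticks--) {
            TR1 = 0;
            TH1 = 0xFE;             /* 100 us reload     */
            TL1 = 0x8E;
            TF1 = 0;
            TR1 = 1;
            while (!TF1)            /* wait for overflow */
                ;
        }
        TR1 = 0;
    }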
This will get the compiler to use the DJNZ assembly instruction.
Unlikely for a multi-byte integer that's in idata or xdata.
More efficient and less likely to be changed by different compiler versions.
I do wonder: what could possibly be the point of micro-optimization like that in a function whose entire purpose is to serve as a complete waste of time?