I've had a working project/system for over 10 years now, and we developed an offshoot product using the same LPC936 chip. The delays on the working system use Timer 0 with the interrupt set to a 1 ms (millisecond) tick. I use a routine like 'delay(500)' to delay 500 ms. It works perfectly.
I copied (or so I thought) the delay routines over to my new product, and found out the delays were off substantially. I discovered that in my working system, I had the timer variables defined as 'xdata'. In my new system, I had defined them as 'idata'. No other differences. I changed the new system to use 'xdata', and guess what, the delays started working perfectly. On the money, every time, every delay. I changed it back to 'idata' and a couple of delays are correct, but some are way under the proper delay, and some are longer than it. What can cause that weird behavior?
I have a test case that does nothing but turn on an LED and delay, to cut out all other activity. This is the only timer running, and no other interrupts are set up. I assume there is a register or stack corruption problem, or some race condition, when using idata over xdata, but I don't know at this stage.
Any ideas?
Sutton
I agree with the above posters, but I would normally take a different approach. Instead of having your timer_count4++ incrementing, I would use timer_count4-- and reverse my logic. This will prompt the compiler to use the DJNZ assembly instruction. More efficient, and less likely to be changed by different compiles.
Of course, as will be pointed out to you many times in the next few days, with a resolution of 100 µs you should use the timer and assembler code for your delay functions.
Bradford
This will prompt the compiler to use the DJNZ assembly instruction.
Unlikely for a multi-byte integer that's in idata or xdata.
More efficient and less likly to be changed by different compiles.
I do wonder: what could possibly be the point of a micro-optimization like that in a function whose entire purpose is to be a complete waste of time?