I am using an AT89C53-24 microcontroller with a 12.000 MHz crystal. My program uses hardware timer T1 (Mode 1, interrupts enabled) to provide a 20 mSec heartbeat, and hardware timer T0 (Mode 1, interrupts not enabled) to provide delays at a 1 mSec rate. According to the performance analyzer, the measured time for a 500 mSec delay request using T0 is on the order of 500.62 mSec.
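For reference, the delay routine is structured roughly like this (a simplified sketch rather than my exact code; it assumes the standard Keil C51 SFR names and the 1 uSec machine cycle that the 12.000 MHz crystal gives):

/* Polled 1 mSec tick using T0 in mode 1 (16-bit), no interrupt.
   Assumes TMOD has already been set with T0 in mode 1 elsewhere,
   e.g. TMOD |= 0x01. With a 12 MHz crystal one machine cycle is
   1 uSec, so 1 mSec = 1000 counts and the reload is
   65536 - 1000 = 0xFC18. Function name is just illustrative. */
#include <reg51.h>

void delay_ms(unsigned int ms)
{
    while (ms--) {
        TH0 = 0xFC;        /* reload for 1 mSec to overflow */
        TL0 = 0x18;
        TF0 = 0;           /* clear any pending overflow flag */
        TR0 = 1;           /* start timer 0 */
        while (!TF0) ;     /* poll the overflow flag */
        TR0 = 0;           /* stop timer 0 between ticks */
    }
}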
When I observe this delay on the actual hardware (driving a 7-seg LED count, judged by eye only), it seems more like the order of 200 mSec. I changed the delay to 1000 mSec; the performance analyzer then gave 1.0019 seconds, but there was no perceptible change in the rate on the actual hardware. I have checked the following:
- The CPU frequency matches the target option frequency in the simulator.
- The CPU selected as the simulation target matches the CPU used in the hardware.
- The CPU device selected in the programmer matches the CPU to be programmed.
- The behavior is the same independent of the interrupt status for timer 1.
I could not find anything in the simulator documentation about the relationship between simulated and actual performance. Is there a link between the two timers? Since one is "firing" every 20 mSec (under interrupt), I could see the interrupt adding a slight deviation to the 1 mSec timer-based loop roughly 1/20th of the time, but I thought the timers were hardware driven and therefore completely independent. Any ideas? Have I missed some fundamental concept about timers on the 8051 family?
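In case it helps, the timer setup looks roughly like this (again a simplified sketch, not the exact code; reload values assume the 12.000 MHz crystal, and the tick counter name is just illustrative). As far as I can see, the only thing the two timers share is the TMOD register and the machine-cycle clock:

/* T1 provides the 20 mSec heartbeat under interrupt; T0 is left to
   the polled delay routine. 20 mSec = 20000 machine cycles, so the
   reload is 65536 - 20000 = 0xB1E0. */
#include <reg51.h>

volatile unsigned int heartbeat_ticks = 0;

void timer_init(void)
{
    TMOD = 0x11;           /* T0 mode 1 and T1 mode 1 (both 16-bit) */
    TH1  = 0xB1;           /* reload for 20 mSec to overflow */
    TL1  = 0xE0;
    ET1  = 1;              /* enable timer 1 interrupt */
    EA   = 1;              /* global interrupt enable */
    TR1  = 1;              /* start the heartbeat timer */
}

void timer1_isr(void) interrupt 3   /* timer 1 overflow vector */
{
    TH1 = 0xB1;            /* manual reload (mode 1 does not auto-reload) */
    TL1 = 0xE0;
    heartbeat_ticks++;
}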
Any help would be appreciated.