I use Keil sometimes, and while using a timer in auto-reload mode, TH0 is loaded with the calculated value but TL0 starts incrementing from 00 instead of the calculated value F0. Am I missing something while using or configuring the Keil simulator?
"no simulator is perfect, whatever is documented in the datasheet for the chip may and may not be correctly simulated."
reading comprehension, erik.
that "it" refers to his statement about the chip loading up both TH0/TL0 once set in mode 2. the chip does NOT do that. and the simulator correctly simulates that portion of the functionality.
and his code also ran as expected in the simulator, as it did on real hardware - because I tried. none of what he "observed" could be observed in my set-up, or inferred from the datasheet, or from simulations.
I am inclined to say that this is a user (observational) problem.
Even though I understand that the simulator is not designed to operate in a specific way for certain modes, that does not mean it should get stuck at a certain instruction and never return to the main program or execute the next set of instructions. If you run the code you will find that it's stuck in the while loop, with the timer getting loaded and overflowing repeatedly, and the program does not return to main. I don't think any datasheet specifies such abnormal operation. How does this simulator help in debug mode when it gets stuck at some point for such a small program?
"How does this simulator help in debug mode when it gets stuck at some point for such a small program?"
The issue is not the size of the program, but whether the particular operation was correctly implemented when the simulator was made. You are/were writing a somewhat 'exotic' piece of code: using reload "without making use of the feature", and I can easily see such a case "slipping through the cracks". I (and, I guess, most) have never used reload and stop together. Reload is usually used with a "timer tick" that you never stop.
Erik
"I can easily see such "slipping through the cracks"."
that's the problem throughout the whole discussion. his code absolutely works, both in simulation and on actual hardware.
and his code has an identical return mechanism to yours. so if his code couldn't return, yours couldn't either.
all you need to do is to load his code into the compiler and run it through.
the problem is NOT the code.
"all you need to do is to load his code into the compiler and run it through."
I see no problem with it compiling, but the discussion is about simulating.
PS I got rid of the simulator ages ago when it took me through some frustrating sequences. If I can't emulate/ICE/JTAG/... I do not do.
"Ashley,
Why don't you just help the poor chap?!"
A leopard can't change its spots. It's just not Ashley's way. Here is today's example of Ashley's ("millwood" and "fdan00" on the Microchip forum) gentle hand being helpful:
www.microchip.com/.../fb.ashx
So you see, it's a disorder of sorts ... and it does get worse over time.
"www.microchip.com/.../fb.ashx"
i got some "internal error" when i clicked on that link.
anybody else?
Link works for me.
that's a pretty entertaining discussion, :)
my favorite:
union { float f; unsigned long ul; } u;
unsigned char a, b, c, d;
u.ul = (a << 24) | (b << 16) | (c << 8) | d;
is that representative of PIC programmers?
in terms of speed, the "left shift" approach takes 340 - 415us, on an 89C51 running a 24MHz crystal, to convert four unsigned char values into a float.
in comparison, the "union" approach (assigning the values to four members of the union) takes 4us under the same condition.
for a speed differential of 85 - 100x, against the "left shift" approach.
per's pointer-conversion approach takes 8us to finish.
totally non-scientific.
The "left shift" approach depends a lot on the compiler used. Some compilers detect shifts of n*8 bits as byte addressing. Some compilers may further notice if the bytes are in the correct order relative to the processor, in which case everything can merge into a single assignment.
Next thing is that older processors may suffer a lot from shift operations, while newer processors often have barrel shifters in which case the number of steps to shift doesn't matter.
In the end, the runtime speed is seldom a problem if the type conversion between bytes and a float or double is caused by a serial transfer. The code size difference may be more important.
"... while newer processors often have barrel shifters in which case the number of steps to shift doesn't matter."
Some FPGA soft-core processors are nice in that you can enable inclusion of a barrel shifter if your application benefits from it; otherwise, not having one leaves resources free for other things.
"The "left shift" approach depends a lot on the used compiler. "
and hardware too. I compiled it for the cortex-m3 chips (on mdk and iar) and the speed differential is considerably smaller - makes a lot of sense.
I guess the 8-bit devices incur a large penalty in processing wider data types.
"I guess the 8-bit devices incur a large penalty in processing wider data types."
A strict 8-bit processor shouldn't suffer any penalty, since a decent compiler should manage to detect that the operation is four 8-bit reads and four 8-bit writes. The cost should mainly be the ability to work with pointers.
With a bad compiler and no barrel shifter, the cost can be tremendous, but it is a disgrace if the compiler does not contain logic for detecting shifts resulting in 8-bit aligned reads and writes.
Posting queries on this forum is useless. One hardly gets answers, except people abusing each other and pointing at trivial issues.