Does anybody know the purpose of doing this?
void Send_CmdNo(unsigned char command)
{
    CSX  = 0;         /* select the display controller          */
    RDX  = 1;         /* read strobe inactive                   */
    D_CX = 0;         /* command (not data) register selected   */
    DBUS = command;   /* put the command on the data bus        */
    WRX  = 0;         /* write strobe low...                    */
    WRX  = 0;         /* ...the repeats hold it low for extra   */
    WRX  = 0;         /* instruction cycles                     */
    WRX  = 1;         /* rising edge latches the command        */
    CSX  = 1;         /* deselect                               */
}
Why use WRX=0 three times before WRX=1? Wouldn't that just give you WRX=1?
By the way, WRX is just
sbit WRX = P7^2;
Does it mean that on the I/O pin the output will send out 0 three times and then 1 once? Does it really execute that fast?
OK... thanks... I got it all except this part
and I am using the Keil C51, so all that is just pointless now???
If you're using a variable declared as sfr, the volatile keyword may be omitted. If you're using something else (for example, I/O mapped into xdata space or similar), you'll need to tell the compiler by using the volatile keyword.
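As a minimal sketch of the second case (the 0x8000 address and the name lcd_reg are assumptions, not from this thread): an I/O register mapped into xdata space needs volatile, or the optimizer is free to merge the repeated writes into one.

/* Memory-mapped I/O in xdata space: 'volatile' forces every write
   to actually appear on the bus instead of being collapsed. */
volatile unsigned char xdata lcd_reg _at_ 0x8000;

void pulse(unsigned char v)
{
    lcd_reg = v;   /* all three writes are performed...          */
    lcd_reg = v;   /* ...without volatile, C51 could keep only   */
    lcd_reg = v;   /* the last one                               */
}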
Even simpler: do not optimize, and keep your code debuggable.
Erik
do not optimize, and keep your code debuggable
Erik,
Just out of interest (and I know it's off topic), could you tell me (us) whether this is something you would do for production code?
I would normally use 'near-to-max' (i.e., common block subroutines) for both the debugging and production builds, and only switch down when I have a really nasty problem to catch or when I suspect that the optimizer itself is causing a problem.
The reason I do this is that I want to give the optimized code as much of a thrashing as possible before it gets released.
David.
"I am using the Keil C51, so all that is just pointless now???"
No, it is not pointless.
It is very important that you understand the issue here - otherwise you will get caught out!
You need to understand what the 'volatile' keyword does, and when and why it is needed - this is general to any 'C' compiler including Keil.
You need to understand why you can omit it in the special case of the Keil-specific 'sfr' keyword extension.
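A quick sketch of that special case (the 0xE8 address for P7 is an assumption; take the real value from your derivative's header file): since C51 never optimizes away accesses to sfr/sbit objects, volatile is unnecessary here.

sfr  P7  = 0xE8;   /* assumed bit-addressable SFR address */
sbit WRX = P7^2;   /* same declaration as quoted above    */

void strobe(void)
{
    WRX = 0;   /* each assignment compiles to its own CLR bit */
    WRX = 0;   /* instruction, so the low pulse really is     */
    WRX = 0;   /* stretched by the repeats                    */
    WRX = 1;   /* SETB ends the pulse                         */
}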
Optimised code is still debuggable - it's just more difficult.
The higher the optimisation level, the harder the debugging gets...
David,
There's a saying that goes "Premature optimization is the root of all evil." from some guy named Donald Knuth (whoever that is :). I generally follow Erik's sentiment on this one. I start with the optimizer OFF and only turn it on if I find I cannot meet performance requirements without it.
The real difficulty (and the one I'm sure Erik will mention) is that optimized code is generally more difficult to debug since the source no longer necessarily aligns with the actual assembly code to be executed.
That said, I don't have quite as great a hatred for the optimizer as Erik. I think one of his main gripes is that there are quite a few sensible optimizations that do not affect debuggability that should rightfully be done by the compiler.
-Jay Daniel
ABSOLUTELY!
Yes, the chip costs $0.10 more to hold the (slightly larger) non-'optimized' code, but working in an environment where 4711 externally generated, timing-critical things seem to happen at once, it is ESSENTIAL that the "production code" is debuggable AS IS. The fact that many of the devices we interface to do not adhere completely to the standards makes this even more essential. It is no use telling the customer "the device you attached is not working correctly"; the reply is always "they say it is". I had to jump through hoops to accept data from some very nasty devices, still without making a 'special version'.
As to "Optimised code is still debuggable - it's just more difficult": some issues are just not debuggable in optimized code. Debugging optimized code often requires either turning the optimization off or "inserting a few instructions". Both change the timing.
I do not give a hoot about issues with debugging solid errors - that is a breeze whether optimized or not - but a nasty timing bug WILL change if you cannot debug the code AS IS.
some issues are just not debuggable in optimized code.
As much as I agree with you about the optimizer being problematic, I declare shenanigans on this statement. Everything is debuggable, even in optimized code. The fact that you'll be brute-force figuring quite a bit out notwithstanding, it is as Andy says--just harder.
"it is as Andy says--just harder"
Though, of course, it can be so much harder that it is not worth the effort.
Again: debugging is possible; but maybe not practical (especially within the confines of a commercial environment).
Jay,
please advise how you would debug the following in optimized code based on an actual case.
Through some hard work you have determined that the thing that hits once a day is related to variable x being 47 when a certain thread of code is executed. You load up the ICE and set a breakpoint: "if x == 47 at this location, break" - and find out that that particular place is made 'shared' by the optimizer. You turn the optimizer off and the problem goes away. You add a bit of code so that the interesting routine sets a flag, and you test that flag in the shared code before the break - and the problem goes away again. What do you do now???
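For concreteness, a sketch of the flag instrumentation described above (all names are hypothetical): even this tiny addition changes the code and its timing, which is why the bug goes into hiding.

volatile unsigned char came_from_suspect = 0;  /* marks the path     */
volatile unsigned char x;                      /* variable under watch */

void trap(void)
{
    /* empty function used purely as an ICE breakpoint target */
}

void shared_code(void)
{
    if (came_from_suspect && x == 47) {
        trap();   /* break here instead of on x alone */
    }
    /* ...the code the optimizer shares between callers... */
}

void suspect_routine(void)
{
    came_from_suspect = 1;   /* flag the interesting path */
    shared_code();
    came_from_suspect = 0;
}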
Anyhow, in the actual case I finally found a way to catch the bugger, but, if I recall correctly, it took weeks of hard work.
So, yes, it is only harder to debug optimized code, but in some cases so hard that 'impossible' is not the wrong word to use.
DO appreciate that the cases I refer to are not "debugging during development" but taking care of a customer complaint (i.e., fix it NOW).
I wasn't trying to imply that it couldn't make solving problems difficult, but here's a go:
First, I would try not looking through so narrow a window. You've narrowed it down to some condition in the software, but what conditions in the SYSTEM are happening? I would try to gather timing data about other events going on at the same time, specifically those that might have a reasonable chance of causing x to equal 47. I would try to capture the events on a scope or logic analyzer and see whether the relative timing between any two of them is consistent.
If and when I found the inter-event timing that seemed most likely to cause the problem, I would add in test code in such a way as to EXACERBATE the problem. That is, I would alter the timing of the code, but then make further alterations to try and make that condition very LIKELY to happen. Of course, this is easier for me since I'm dealing in theoreticals in a forum and not standing inside a factory muddling with a small mixed-signal scope.
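One cheap way to get those relative timings onto a scope (the pin assignments are assumptions, not from the thread) is to toggle spare port pins at each event and capture them together:

#include <reg51.h>        /* declares P1 */

sbit DBG_EVT_A = P1^0;    /* hypothetical spare pins */
sbit DBG_EVT_B = P1^1;

void event_a_hook(void)
{
    DBG_EVT_A = !DBG_EVT_A;   /* one edge per occurrence of event A */
}

void event_b_hook(void)
{
    DBG_EVT_B = !DBG_EVT_B;   /* one edge per occurrence of event B */
}

A logic analyzer on the two pins then shows directly whether the inter-event timing is consistent.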
There's a saying that goes "Premature optimization is the root of all evil." from some guy named Donald Knuth (whoever that is :).
Dear old Knuth (American professor of computer science, at Stanford) was not thinking about optimizing compilers. You shouldn't try to use clever things, or use more advanced algorithms, until you have made sure that you really need it. In short, a different way of saying KISS - Keep It Simple Stupid.
Jay, let me repeat:
Try debugging a complex one in optimized code when the customer calls your boss 3 times a day asking "have you got it fixed?". I would say that 'hard' does not cover it, but 'impossible' does.
As I stated in my previous post, I did get the 'impossible' solved, but the time it took was totally out of line.
so, let me try this one and see if we can agree:
debugging complex bugs in optimized code is impossible to do in a reasonable time
"So, yes, it is only harder to debug optimized code, but in some case so hard that 'impossible' is not the wrong word to use."
The word I prefer is, "Impractical"
Though I suppose you could take that to mean, "impossible for all practical purposes"...
;-)
You shouldn't try to use clever things, or use more advanced algorithms, until you have made sure that you really need it. In short, a different way of saying KISS - Keep It Simple Stupid.
Per, I need your help. I have thought about this and, darn it, I cannot think of a case where the simple solution is not the optimal one. Please note 'simple' does not mean 'allowing laziness'.