Does anybody know the purpose of doing this?
void Send_CmdNo(unsigned char command)
{
    CSX  = 0;
    RDX  = 1;
    D_CX = 0;
    DBUS = command;
    WRX  = 0;
    WRX  = 0;
    WRX  = 0;
    WRX  = 1;
    CSX  = 1;
}
Why use WRX = 0 three times before WRX = 1? Wouldn't that just give you WRX = 1?
By the way, WRX is just:
sbit WRX = P7^2;
Does it mean the I/O pin will output 0 three times and then 1 once? Does it really switch that fast?
Do not optimize, and keep your code debuggable.
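To answer the original question: the three consecutive WRX = 0 writes are there to stretch the write-strobe low pulse. Each assignment to the sbit costs at least one machine cycle, and the controller on the bus presumably needs the strobe held low for longer than a single cycle; the final WRX = 1 gives the rising edge that latches the command. A sketch of an equivalent way to write it using the Keil C51 _nop_() intrinsic (the function name is invented, and the number of cycles needed is an assumption - it depends on your crystal and the attached part's datasheet):

#include <intrins.h>         /* _nop_() intrinsic                        */

sbit WRX = P7^2;             /* write strobe; P7 comes from your         */
                             /* derivative's header, as in the original  */

void Pulse_WRX(void)
{
    WRX = 0;                 /* drive the strobe low                     */
    _nop_();                 /* each _nop_() burns one machine cycle,    */
    _nop_();                 /* stretching the low pulse                 */
    WRX = 1;                 /* rising edge latches the command          */
}

Either way works; the NOP version just makes the intent visible, whereas three identical assignments invite exactly the "wouldn't that just give you 1?" question.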
Erik,
Just out of interest (and I know it's off topic), could you tell me (us) whether this is something you would do for production code?
I would normally use 'near-to-max' optimization (i.e., common block subroutines enabled) for both the debugging and production builds, and only switch down when I have a really nasty problem to catch or when I suspect that the optimizer itself is causing a problem.
The reason I do this is that I want to give the optimized code as much of a thrashing as possible before it gets released.
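In Keil C51 terms, that means most modules get something like this at the top (a sketch; the exact level numbers vary between compiler versions, so check the manual for yours):

/* top of a normal production module: 'near-to-max' optimization */
#pragma OPTIMIZE (9)         /* common block subroutines enabled         */

/* top of the one module being chased with the ICE */
#pragma OPTIMIZE (2)         /* low enough that source lines still map   */
                             /* more or less directly onto the           */
                             /* generated assembly                       */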
David.
David,
There's a saying that goes "Premature optimization is the root of all evil," from some guy named Donald Knuth (whoever that is :). I generally follow Erik's sentiment on this one. I start with the optimizer OFF and only turn it on if I find I cannot meet performance requirements without it.
The real difficulty (and the one I'm sure Erik will mention) is that optimized code is generally more difficult to debug since the source no longer necessarily aligns with the actual assembly code to be executed.
That said, I don't have quite as great a hatred for the optimizer as Erik. I think one of his main gripes is that there are quite a few sensible optimizations that do not affect debuggability and that should rightfully be done by the compiler.
-Jay Daniel
ABSOLUTELY!
Yes, the chip costs $0.10 more to hold the (slightly larger) non-optimized code, but working in an environment where 4711 externally generated time- and timing-critical things seem to happen at once, it is ESSENTIAL that the "production code" is debuggable AS IS. The fact that many of the devices we interface to do not adhere completely to the standards makes this even more essential. It is no use telling the customer "the device you attached is not working correctly"; the reply always is "they say it is". I have had to jump through hoops to accept data from some very nasty devices, still without making a 'special version'.
As to "optimized code is still debuggable - it's just more difficult": some issues are just not debuggable in optimized code. Debugging optimized code often requires either turning the optimization off or "inserting a few instructions". Both change the timing.
I do not give a hoot about issues with debugging solid errors; that is a breeze whether optimized or not. But a nasty timing bug WILL change if you cannot debug the code AS IS.
Erik
"some issues are just not debuggable in optimized code"
As much as I agree with you about the optimizer being problematic, I declare shenanigans on this statement. Everything is debuggable, even in optimized code. The fact that you'll be brute-force figuring quite a bit out notwithstanding, it is as Andy says--just harder.
"it is as Andy says--just harder"
Though, of course, it can be so much harder that it is not worth the effort.
Again: debugging is possible, but maybe not practical (especially within the confines of a commercial environment).
Jay,
Please advise how you would debug the following, based on an actual case, in optimized code.
Through some hard work you have determined that the thing that hits once a day is related to variable x being 47 when a certain thread of code is executed. You load up the ICE and set a breakpoint: "if x == 47 at this location, break", and find out that that particular place is made 'shared' by the optimizer. You turn the optimizer off, and the problem goes away. You add a bit of code where the interesting routine sets a flag, test that flag in the shared code before the break, and the problem goes away again. What do you do now???
Anyhow, in the actual case I finally found a way to catch the bugger, but, if I recall correctly, it took weeks of hard work.
So, yes, it is only harder to debug optimized code, but in some cases so hard that 'impossible' is not the wrong word to use.
DO appreciate that the cases I refer to are not "debugging during development" but taking care of a customer complaint (i.e., fix it NOW).
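To be concrete, the 'flag' instrumentation I describe above looked roughly like this (the names are invented for illustration; the real code is long gone):

volatile bit from_suspect_path = 0;    /* 'volatile' so no access gets   */
                                       /* removed by the compiler        */
volatile unsigned char x;              /* the variable under suspicion   */

void shared_code(void)                 /* made 'shared' by the optimizer */
{
    if (from_suspect_path && x == 47)
    {
        /* set the ICE breakpoint here: it now fires only when the      */
        /* shared code is reached from the suspect path with x == 47    */
    }
    /* ... the rest of the shared code ... */
}

void suspect_routine(void)
{
    from_suspect_path = 1;             /* mark the interesting path      */
    shared_code();
    from_suspect_path = 0;
}

And, as I said, even those few added instructions changed the timing enough that the problem went away again.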
I wasn't trying to imply that it couldn't make solving problems difficult, but here's a go:
First, I would try not looking through so narrow a window. You've narrowed it down to some condition in the software, but what conditions in the SYSTEM are happening? I would try to gather timing data about other events going on at the same time, specifically those that might have a reasonable chance of causing x to equal 47. I would try to capture the events on a scope or logic analyzer and see whether the relative timing between any two of them is consistent.
If and when I found the inter-event timing that seemed most likely to cause the problem, I would add in test code in such a way as to EXACERBATE the problem. That is, I would alter the timing of the code, but then make further alterations to try to make that condition very LIKELY to happen. Of course, this is easier for me since I'm dealing in theoreticals in a forum and not standing inside a factory muddling with a small mixed-signal scope.
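For instance - and this is only a sketch, with the pin choice invented - I would wrap each suspect event with writes to a spare port pin so the scope or analyzer can timestamp it against everything else:

#include <reg51.h>           /* or your derivative's header, for P1      */

sbit DEBUG_PIN = P1^0;       /* any spare pin; P1^0 is an arbitrary pick */

void event_of_interest(void)
{
    DEBUG_PIN = 1;           /* rising edge marks event entry            */
    /* ... the code under suspicion ... */
    DEBUG_PIN = 0;           /* falling edge marks event exit            */
}

One pin per event gives you the inter-event timings directly on the analyzer, at a cost of a cycle or two per edge, which is usually small enough not to disturb the bug you're chasing.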
Jay, let me repeat:
Try debugging a complex one in optimized code when the customer calls your boss three times a day asking "have you got it fixed?" I would say that 'hard' does not cover it, but 'impossible' does.
As I stated in my previous post, I did get the 'impossible' solved, but the time it took was totally out of line.
So, let me try this one and see if we can agree:
Debugging complex bugs in optimized code is impossible to do in a reasonable time.
"So, yes, it is only harder to debug optimized code, but in some case so hard that 'impossible' is not the wrong word to use."
The word I prefer is "impractical".
Though I suppose you could take that to mean, "impossible for all practical purposes"...
;-)
Mr. Language Expert: while I agree with you in principle, I would offer this analogy: "it is impossible to strike a match on a bar of soap, unless it is cooled down in liquid nitrogen".
Does 'impossible' apply?
BTW, I was behind a truck from a demolition company at a stop light this morning, and the bumper sticker read "everything is possible using heavy explosives". So yes, while you and I would agree that it is not possible to remove Mt. Everest, as a matter of fact it is :)
If I or one of my team released code that had as many timing problems and bugs of the type Erik seems to have to fix, then I would be very concerned.
We design and implement projects for small through to large embedded systems that frequently have to keep track of a very large number of asynchronous events. As part of the design phase, we attempt to predict as many eventualities as possible, thus reducing the chances of leaving open windows for timing problems in the first place.
Timing related bugs normally find their way through gaps in the code.
A good designer should aim to eliminate those gaps in the first place.
Yes, most timing glitches are caused by invalid assumptions. Everyone with experience knows to plan ahead and design around timing issues, but the problem is that when all facts are not 100% available, assumptions must be made. You have to realize that when there are gaps in the documentation (customer requirements, protocol specifications, datasheets, RTL manuals), it is very hard not to get gaps in the code.
Because timing bugs are such a big problem to find and isolate, having to hunt down just one or two every year may be enough to give them top priority, and to adjust the development process in ways that help spot and correct them.
"As part of the design phase, we attempt to predict as many eventualities as possible, thus reducing the chances of leaving open windows for timing problems in the first place." Elementary, my dear Watson. Any reasonable design will be done that way.
There are, however, three cases to contend with:
a) "when all facts are not 100% available, assumptions must be made";
b) when stated facts are not correct;
c) when stated facts are ignored.
I have had experiences where the units I work with work well with brands a, b, and c, and when someone buys brand d it does not work. The difference between a, b, c, and d is which part of the spec they ignore.
I would dearly love to be able to state "my products will not operate with equipment that does not follow the standards", but that is a pipe dream.