Does anybody know the purpose of doing this:
void Send_CmdNo(unsigned char command)
{
    CSX  = 0;
    RDX  = 1;
    D_CX = 0;
    DBUS = command;
    WRX  = 0;
    WRX  = 0;
    WRX  = 0;
    WRX  = 1;
    CSX  = 1;
}
Why use WRX=0 three times before WRX=1? Wouldn't that just give you WRX=1?
By the way, WRX is just
sbit WRX = P7^2;
Does it mean that the I/O pin will output 0 three times and then 1 once? Does it really perform that fast?
No, see it as a short delay, to make sure that the pulse isn't too short.
By the way - why not use a more descriptive thread title than "HELP!"? I can't really see how HELP! is a summary of your post.
So a short delay, huh... I see. So you could use 4 or 5 or 6 or 7? And excuse me for the title, because I really can't think of another one.
"so you can use 4 or 5 or 6 or 7?"
Of course you can!
As already explained, this just provides a "short" delay - so the more times you repeat it, the longer the delay will be!
It's just a matter of determining how much delay you need, and then using the appropriate number.
A word of warning:
In general, an optimising compiler (such as Keil C51) will look at code like this
fred = 0; fred = 0; fred = 0; fred = 1;
and realise that all the zero assignments are pointless - so it will optimise it to
fred = 1;
To prevent this, you need to use the standard 'C' volatile keyword - look it up in any 'C' textbook.
(sfr is a Keil extension to the standard 'C' language - it is implicitly volatile).
OK... thanks... I got it all but this part,
and I am using the Keil C51, so all that is just pointless now???
If you're using a variable declared as sfr, the volatile keyword may be omitted. If you're using something else (for example, I/O mapped into xdata space or similar), you'll need to tell the compiler by using the volatile keyword.
Even simpler: do not optimize, and keep your code debuggable.
Erik
do not optimize and keep your code debuggable
Erik,
Just out of interest (and I know off topic), could you tell me(us) whether this is something you would do for production code.
I would normally use 'near-to-max' (i.e., common block subroutines) for both the debugging and production builds - and only switch down when I have a really nasty problem to catch, or when I suspect that the optimizer itself is causing a problem.
The reason I do this is that I want to give the optimized code as much of a thrashing as possible before it gets released.
David.
"I am using the Keil C51, so all that is just pointless now???"
No, it is not pointless.
It is very important that you understand the issue here - otherwise you will get caught out!
You need to understand what the 'volatile' keyword does, and when and why it is needed - this is general to any 'C' compiler including Keil.
You need to understand why you can omit it in the special case of the Keil-specific 'sfr' keyword extension.
Optimised code is still debuggable - it's just more difficult.
The higher the optimisation level, the harder the debugging gets...
David,
There's a saying that goes "Premature optimization is the root of all evil." from some guy named Donald Knuth (whoever that is :). I generally follow Erik's sentiment on this one. I start with the optimizer OFF and only turn it on if I find I cannot meet performance requirements without it.
The real difficulty (and the one I'm sure Erik will mention) is that optimized code is generally more difficult to debug since the source no longer necessarily aligns with the actual assembly code to be executed.
That said, I don't have quite as great a hatred for the optimizer as Erik. I think one of his main gripes is that there are quite a few sensible optimizations that do not affect debuggability and that should rightfully be done by the compiler.
-Jay Daniel
ABSOLUTELY!
Yes, the chip costs $0.10 more to hold the (slightly larger) non-'optimized' code, but working in an environment where 4711 externally generated, time- and timing-critical things seem to happen at once, it is ESSENTIAL that the "production code" is debuggable AS IS. The fact that many of the devices we interface to do not adhere completely to the standards makes this even more essential. It is no use telling the customer "the device you attached is not working correctly" - the reply is always "they say it is". I had to jump through hoops to accept data from some very nasty devices, still without making a 'special version'.
As to "Optimised code is still debuggable - it's just more difficult": some issues are just not debuggable in optimized code. Debugging optimized code often requires either turning the optimization off or "inserting a few instructions". Both change the timing.
I do not give a hoot about issues with debugging solid errors - that is a breeze whether optimized or not - but a nasty timing bug WILL change if you cannot debug the code AS IS.
some issues are just not debuggable in optimized code.
As much as I agree with you about the optimizer being problematic, I declare shenanigans on this statement. Everything is debuggable, even in optimized code. The fact that you'll be brute-force figuring quite a bit out notwithstanding, it is as Andy says--just harder.
"it is as Andy says--just harder"
Though, of course, it can be so much harder that it is not worth the effort.
Again: debugging is possible; but maybe not practical (especially within the confines of a commercial environment).
Jay,
please advise how you would debug the following in optimized code, based on an actual case.
Through some hard work you have determined that the thing that hits once a day is related to variable x being 47 when a certain thread of code is executed. You load up the ICE and set a breakpoint "if x == 47 at this location, break" - and find out that that particular place is made 'shared' by the optimizer. You turn the optimizer off and the problem goes away. You add a bit of code where the interesting routine sets a flag, and test that flag in the shared code before the break - and the problem goes away again. What do you do now???
Anyhow, in the actual case I finally found a way to catch the bugger, but, if I recall correctly, it took weeks of hard work.
So, yes, it is only harder to debug optimized code, but in some cases so hard that 'impossible' is not the wrong word to use.
DO appreciate that the cases I refer to are not "debugging during development" but taking care of a customer complaint (i.e. fix it NOW).
I wasn't trying to imply that it couldn't make solving problems difficult, but here's a go:
First, I would try not looking through so narrow a window. You've narrowed it down to some condition in the software, but what conditions in the SYSTEM are happening? I would try to gather timing data about other events going on at the same time, specifically those that might have a reasonable chance of causing x to equal 47. I would try to capture relative timings on a scope or logic analyzer and see if the relative timings between any two events are consistent.
If and when I found the inter-event timing that seemed most likely to cause the problem, I would add in test code in such a way as to EXACERBATE the problem. That is, I would alter the timing of the code, but then make further alterations to try and make that condition very LIKELY to happen. Of course, this is easier for me since I'm dealing in theoreticals in a forum and not standing inside a factory muddling with a small mixed-signal scope.
There's a saying that goes "Premature optimization is the root of all evil." from some guy named Donald Knuth (whoever that is :).
Dear old Knuth (American professor of computer science, of Norwegian descent) was not thinking about optimizing compilers. You shouldn't try to use clever things, or use more advanced algorithms, until you have made sure that you really need them. In short, a different way of saying KISS - Keep It Simple, Stupid.
You shouldn't try to use clever things, or use more advanced algorithms, until you have made sure that you really need it. In short, a different way of saying KISS - Keep It Simple Stupid.
Per, I need your help. I have thought about this and, darn it, I cannot think of a case where the simple solution is not the optimal one. Please note 'simple' does not mean 'allowing laziness'.
I'm quite surprised at the number of responses to my originating (off-topic) post.
I won't bore you by trying to give any smarty-pants reasons for why I do it the way I do; i.e., optimizations during both development and release phases.
But I would say that for the type of projects I've been involved with (for the past 20 odd years) my methods have not caused any great strain. Maybe I've been lucky.
I think one thing is clear - there is no one solution. A practice that is good for (for example) Erik may not be so good for (for example) me.
I am not saying one is better, or right, or wrong - But until I see a project that requires such practices, I'll stick with what I am comfortable with.
Anyway, nice to read your responses.
Cheers.
you have your opinion, I have mine and we will both "stick with what I am comfortable with".
That does not mean that the pitfalls and pratfalls of either should not be brought to light.
"you have your opinion, I have mine..."
There's more to it than just opinions - though they definitely play a (significant) part.
The real thing is: you have your particular set of requirements, and I have mine - so what may be "optimum" by your particular set of requirements may well not be optimum by mine.
Personally, I like to use compiler settings as similar as possible while developing and when sending out a release. Mainly because of the small number of developers using embedded compilers, resulting in a larger probability of getting hit by compiler bugs compared to the mainstream PC compilers.
Usually, the only difference between a development and a release build is that the development build contains a number of extra integrity tests - where possible with regard to real-time requirements or space.
For some hardware peripherals it may also contain code stubs for regression testing, i.e. allowing the application itself to generate events without the need for external hardware to simulate a "user".
Most operations I do in embedded systems are quite simple, so there isn't much debugging needed. The majority of bugs can be found quite quickly during module tests, when there are no other threads or interrupts involved.
More advanced algorithms are almost always possible to build and debug on a PC.
The only type of bugs that I am really 100% absolutely scared to death about are timing glitches, since they are so extremely hard to find. They are seldom caused by simple code bugs, since I'm usually quite good at taking care of volatile variables, or synchronization between tasks.
They are often caused by either bad documentation (or bad reading skills on my side) in an RTL API or 1000+ page datasheets. Sometimes they are caused by errors in my hardware (since I develop on early prototypes that may not be correctly designed), or by silicon errors (since new products, for price or power reasons, often make use of very new chips) where the error hasn't been found/documented yet, or where the latest errata are only supplied after contacting the distributor/chip vendor.
The bad part is of course that the timing glitches are hard to trigger, and it is hard to decide how to attack the problem, since the bugs are almost always caused by an incorrect assumption by myself or an undocumented "feature" in HW or RTL.
Because of the problems with embedded compilers, I tend to avoid the highest levels of optimization, unless I really need to squeeze in the firmware.
If possible, I also try to make the code regression-testable. Both to allow me to quickly detect new bugs that I have introduced, and to allow me to constantly run the application in "accelerated time", i.e. running hour after hour with faked events arriving at a higher frequency than the real product would normally have to suffer.
I feel that the above tests would be less meaningful if I perform my tests with a different optimization level than the final release. I am actually more prone to switch to an alternative algorithm than to require max size or speed optimizations from the compiler.
When I do need to debug the code, I can normally live with the single-stepping jumping up and down, or multiple variables being optimized into a single processor register. Seeing statements being executed in the same order as the source code is seldom relevant. Timing affecting external signals can normally be measured with an oscilloscope. Timing issues making internal handling too slow (buffers over/underflowing, interrupts nesting or not being serviced before the next interrupt occurs) are not so much affected by the execution order of instructions.
When hw timing really is extremely critical, then the only safe method is normally to switch to assembly to make sure that a changed compiler switch or compiler version doesn't break any assumptions.
One other note: when a chip is available with different amounts of memory, I always try to get a prototype with extra memory, to allow test firmware to contain extra code.
Sorry for the long post.
I probably won't find myself repeating it in that manner, but I do agree with the sentiments.