
HELP!

Does anybody know the purpose of doing this?

void Send_CmdNo(unsigned char command){

    CSX = 0;            /* select the chip (CSX is active low) */
    RDX = 1;            /* keep the read strobe inactive */
    D_CX = 0;           /* 0 = command, 1 = data */
    DBUS = command;     /* put the command byte on the data bus */
    WRX = 0;            /* <-- why write 0 three times? */
    WRX = 0;
    WRX = 0;
    WRX = 1;            /* write strobe back high */
    CSX = 1;            /* deselect the chip */
}

Why use WRX=0 three times before WRX=1?
Wouldn't that just give you WRX=1?

By the way, WRX is just

sbit WRX        = P7^2;

Does it mean the I/O pin will actually output 0 three times and then 1 once?
Does it really execute that fast?

  • There's a saying that goes "Premature optimization is the root of all evil." from some guy named Donald Knuth (whoever that is :).

    Dear old Knuth (American professor of computer science, at Stanford) was not talking about optimizing compilers. You shouldn't try to use clever tricks, or more advanced algorithms, until you have made sure that you really need them. In short, a different way of saying KISS - Keep It Simple, Stupid.

  • You shouldn't try to use clever things, or use more advanced algorithms, until you have made sure that you really need it. In short, a different way of saying KISS - Keep It Simple Stupid.

    Per,
    I need your help. I have thought about this and, darn it, I cannot think of a case where the simple solution is not the optimal one. Please note 'simple' does not mean 'allowing laziness'.

    Erik

  • I'm quite surprised at the number of responses to my originating (off-topic) post.

    I won't bore you by trying to give any smarty-pants reasons for why I do it the way I do, i.e. with optimization enabled during both the development and release phases.

    But I would say that for the type of projects I've been involved with (over the past 20-odd years) my methods have not caused any great strain. Maybe I've been lucky.

    I think one thing is clear - there is no single solution. A practice that is good for (for example) Erik may not be so good for (for example) me.

    I am not saying one is better, or right, or wrong - But until I see a project that requires such practices, I'll stick with what I am comfortable with.

    Anyway, nice to read your responses.

    Cheers.

  • I am not saying one is better, or right, or wrong - But until I see a project that requires such practices, I'll stick with what I am comfortable with.

    You have your opinion, I have mine, and we will both "stick with what I am comfortable with".

    That does not mean that the pitfalls and pratfalls of either should not be brought to light.

    Erik

  • "you have your opinion, I have mine..."

    There's more to it than just opinions - though they definitely play a (significant) part.

    The real thing is: you have your particular set of requirements, and I have mine - so what may be "optimum" by your particular set of requirements may well not be optimum by mine.

  • Personally, I like to use compiler settings that are as similar as possible while developing and when sending out a release, mainly because the small number of developers using embedded compilers means a higher probability of being hit by compiler bugs than with the mainstream PC compilers.

    Usually, the only difference between a development and a release build is that the development build contains a number of extra integrity tests - where possible with regard to real-time requirements or space.
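
    Just as an illustration (the names below are made up for the example, not taken from any real project), such an extra integrity test can be wrapped so that it vanishes completely from the release build:

        #define UART_TX_SIZE 32

        static volatile unsigned char uart_tx_head;

        #ifdef DEV_BUILD
          /* a failed check traps in an endless loop, so the debugger or a
             watchdog reset makes the corruption visible */
          #define INTEGRITY_CHECK(cond)  do { if (!(cond)) for (;;) ; } while (0)
        #else
          /* in the release build the check disappears completely */
          #define INTEGRITY_CHECK(cond)  ((void)0)
        #endif

        void uart_put(unsigned char c)
        {
            INTEGRITY_CHECK(uart_tx_head < UART_TX_SIZE);  /* index still sane? */
            /* ... queue the character ... */
            (void)c;
        }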

    For some hardware peripherals it may also contain code stubs for regression testing, i.e. allowing the application itself to generate events without the need for external hardware to simulate a "user".
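
    A sketch of what such a stub could look like (DEV_BUILD, key_pressed and the pin assignment are invented for the example) - in the development build the "key pressed" input is played back from a table instead of being read from the real pin:

        #include <reg51.h>

        #ifdef DEV_BUILD
        /* scripted "user": key presses are played back from a table */
        static const unsigned char key_script[] = { 1, 0, 1, 1, 0 };
        static unsigned char key_index;

        unsigned char key_pressed(void)
        {
            unsigned char k = key_script[key_index];
            if (++key_index >= sizeof(key_script)) {
                key_index = 0;                  /* loop the script forever */
            }
            return k;
        }
        #else
        sbit KEY = P1^0;                        /* the real input pin */

        unsigned char key_pressed(void)
        {
            return (unsigned char)(KEY == 0);   /* assuming an active-low key */
        }
        #endif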

    Most operations I do in embedded systems are quite simple, so there isn't much debugging needed. The majority of bugs can be found quite quickly during module tests, when there are no other threads or interrupts involved.

    More advanced algorithms are almost always possible to build and debug on a PC.

    The only type of bug that I am really 100% scared to death of is timing glitches, since they are so extremely hard to find. They are seldom caused by simple coding bugs, since I'm usually quite good at taking care of volatile variables and synchronization between tasks.
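
    By "taking care" I mean things along these lines (a C51-flavoured sketch with invented names): everything an interrupt touches is declared volatile, and multi-byte values are copied with the interrupt blocked:

        #include <reg51.h>

        static volatile bit          tick_flag;   /* set by the ISR, polled by the main loop */
        static volatile unsigned int ms_count;    /* 16 bits, so not atomic on a '51 */

        void timer0_isr(void) interrupt 1
        {
            ms_count++;
            tick_flag = 1;
        }

        unsigned int read_ms(void)
        {
            unsigned int copy;

            EA = 0;             /* a 16-bit read is two byte accesses,
                                   so keep the interrupt out while copying */
            copy = ms_count;
            EA = 1;
            return copy;
        }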

    They are often caused by bad documentation (or bad reading skills on my side) of an RTL API or a 1000+ page datasheet. Sometimes they are caused by errors in my hardware (since I develop on early prototypes that may not be correctly designed) or by silicon errors (since new products, for price or power reasons, often make use of very new chips where the error hasn't been found/documented yet, or where the latest errata are only supplied after contacting the distributor/chip vendor).

    The bad part is of course that the timing glitches are hard to trigger, and it is hard to decide how to attack the problem, since the bugs are almost always caused by an incorrect assumption on my side or an undocumented "feature" in HW or RTL.

    Because of the problems with embedded compilers, I tend to avoid the highest levels of optimization, unless I really need to squeeze in the firmware.

    If possible, I also try to make the code regression-testable - both to allow me to quickly detect new bugs that I have introduced, and to allow me to constantly run the application in "accelerated time", i.e. running hour after hour with faked events arriving at a higher frequency than the real product would normally have to suffer.
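
    A sketch of the "accelerated time" idea (again with invented names - the stand-in functions below would be the application's real event queue and dispatcher):

        #define EVENT_BUTTON 1

        static void queue_put(unsigned char type, unsigned char data)
        {
            /* stand-in for the application's real event queue */
            (void)type;
            (void)data;
        }

        static void dispatch_pending_events(void)
        {
            /* stand-in for the normal event handling */
        }

        #ifdef ACCELERATED_TEST
        static void inject_fake_event(void)
        {
            static unsigned int n;
            /* fake a button press far more often than any real user could */
            queue_put(EVENT_BUTTON, (unsigned char)(n++ & 0x07));
        }
        #endif

        void main_loop(void)
        {
            for (;;) {
        #ifdef ACCELERATED_TEST
                inject_fake_event();
        #endif
                dispatch_pending_events();
            }
        }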

    I feel that the above tests would be less meaningful if I perform my tests with a different optimization level than the final release. I am actually more prone to switch to an alternative algorithm than to require max size or speed optimizations from the compiler.

    When I do need to debug the code, I can normally live with the single-stepping jumping up and down, or multiple variables being optimized to use a single processor register. Seeing statements being executed in the same order as the source code is seldom relevant. Timing that affects external signals can normally be measured with an oscilloscope. Timing issues that make internal handling too slow (buffers over- or underflowing, interrupts nesting or not being serviced before the next interrupt occurs) are not much affected by the execution order of instructions.
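
    As an example of how that class of buffer problem can at least be made visible without a debugger, a high-water-mark counter is cheap to leave in a test build (sketch, invented names):

        #define RX_SIZE 16

        static volatile unsigned char rx_count;     /* bytes currently queued */
        static unsigned char rx_high_water;         /* worst case seen so far */

        void rx_store(unsigned char c)              /* called from the receive ISR */
        {
            /* ... put c into the ring buffer ... */
            (void)c;
            rx_count++;
            if (rx_count > rx_high_water) {
                rx_high_water = rx_count;           /* inspect this after a long run */
            }
        }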

    When hardware timing really is extremely critical, the only safe method is normally to switch to assembly, to make sure that a changed compiler switch or compiler version doesn't break any assumptions.

    One other note: when a chip is available with different amounts of memory, I always try to get a prototype with extra memory, to allow the test firmware to contain extra code.

    Sorry for the long post.

  • That does not mean that the pitfalls and pratfalls of either should not be brought to light.

    I probably won't find myself repeating it in that manner, but I do agree with the sentiments.