
HELP!

Does anybody know the purpose of doing this?

void Send_CmdNo(unsigned char command){
    CSX=0;
    RDX=1;
    D_CX=0;
    DBUS=command;
    WRX=0;
    WRX=0;
    WRX=0;
    WRX=1;
    CSX=1;
}

Why use WRX=0 three times before WRX=1?
Wouldn't that just give you WRX=1?

by the way WRX is just

sbit WRX        = P7^2;

Does it mean the I/O pin will output
0 three times and then 1 once? Does it really execute that fast?

  • There's a saying that goes "Premature optimization is the root of all evil." from some guy named Donald Knuth (whoever that is :).

    Dear old Knuth (an American professor of computer science) was not thinking about optimizing compilers. You shouldn't use clever tricks, or more advanced algorithms, until you have made sure that you really need them. In short, a different way of saying KISS: Keep It Simple, Stupid.

  • Jay,
    let me repeat:

    So, yes, it is only harder to debug optimized code, but in some cases so hard that 'impossible' is not the wrong word to use.

    Try debugging a complex one in optimized code while the customer calls your boss three times a day asking "have you got it fixed?". I would say that 'hard' does not cover it, but 'impossible' does.

    As I stated in my previous post, I did get the 'impossible' solved, but the time it took was totally out of line.

    So, let me try this one and see if we can agree:

    debugging complex bugs in optimized code is impossible to do in a reasonable time

    Erik

  • "So, yes, it is only harder to debug optimized code, but in some case so hard that 'impossible' is not the wrong word to use."

    The word I prefer is, "Impractical"

    Though I suppose you could take that to mean, "impossible for all practical purposes"...

    ;-)

You shouldn't use clever tricks, or more advanced algorithms, until you have made sure that you really need them. In short, a different way of saying KISS: Keep It Simple, Stupid.

    Per,
    I need your help. I have thought about this and, darn it, I cannot think of a case where the simple solution is not the optimal one. Please note that 'simple' does not mean 'allowing laziness'.

    Erik

  • I'm quite surprised at the number of responses to my originating (off-topic) post.

    I won't bore you by trying to give any smarty-pants reasons for why I do it the way I do; i.e., optimizations during both development and release phases.

    But I would say that for the type of projects I've been involved with (for the past 20-odd years) my methods have not caused any great strain. Maybe I've been lucky.

    I think one thing is clear - there is no one solution. A practice that is good for (for example) Erik may not be so good for (for example) me.

    I am not saying one is better, or right, or wrong - But until I see a project that requires such practices, I'll stick with what I am comfortable with.

    Anyway, nice to read your responses.

    Cheers.

  • The word I prefer is, "Impractical"

    Though I suppose you could take that to mean, "impossible for all practical purposes"...

    Mr. language expert:
    While I agree with you in principle, I would offer this analogy: "it is impossible to strike a match on a bar of soap unless it is cooled down in liquid nitrogen".

    Does 'impossible' apply?

    Erik

    BTW, I was behind a truck from a demolition company at a stop light this morning, and the bumper sticker read "everything is possible using heavy explosives". So yes, while you and I would agree that it is not possible to remove Mt Everest, as a matter of fact it is :)

  • I am not saying one is better, or right, or wrong - But until I see a project that requires such practices, I'll stick with what I am comfortable with.

    You have your opinion, I have mine, and we will both "stick with what I am comfortable with".

    That does not mean that the pitfalls and pratfalls of either should not be brought to light.

    Erik

  • "you have your opinion, I have mine..."

    There's more to it than just opinions - though they definitely play a (significant) part.

    The real thing is: you have your particular set of requirements, and I have mine - so what may be "optimum" by your particular set of requirements may well not be optimum by mine.

  • Personally, I like to use compiler settings that are as similar as possible while developing and when sending out a release, mainly because of the small number of developers using embedded compilers, which results in a higher probability of getting hit by compiler bugs compared to the mainstream PC compilers.

    Usually, the only difference between a development and a release build is that the development build contains a number of extra integrity tests - where real-time requirements and code space allow.
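
    A minimal sketch of what I mean (the DEVELOPMENT_BUILD symbol, the macro name, and the little queue are made up just for illustration):

    #ifdef DEVELOPMENT_BUILD
    /* Development build: a failed check traps, with the line number
       left where the debugger can inspect it. */
    static void integrity_fault(unsigned int line)
    {
        (void)line;
        for (;;) ;                  /* halt here for the debugger */
    }
    #define INTEGRITY_CHECK(cond) \
        do { if (!(cond)) integrity_fault(__LINE__); } while (0)
    #else
    /* Release build: the checks compile away completely. */
    #define INTEGRITY_CHECK(cond) ((void)0)
    #endif

    #define QUEUE_SIZE 8
    static unsigned char queue[QUEUE_SIZE];
    static unsigned char queue_count;

    unsigned char queue_pop(void)
    {
        INTEGRITY_CHECK(queue_count > 0);   /* invariant tested in dev only */
        return queue[--queue_count];
    }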

    For some hardware peripherals it may also contain code stubs for regression testing, i.e. allowing the application itself to generate events without the need for external hardware to simulate a "user".
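
    A sketch of the stub idea, assuming a hypothetical button input (the REGRESSION_TEST symbol, the pin choice, and the function names are invented for the example):

    #ifdef REGRESSION_TEST
    /* Stubbed input: the test harness "presses the button" in software,
       so no external hardware is needed to exercise the state machine. */
    static unsigned char fake_press_pending;
    void test_inject_press(void) { fake_press_pending = 1; }

    unsigned char button_is_pressed(void)
    {
        unsigned char hit = fake_press_pending;
        fake_press_pending = 0;
        return hit;
    }
    #else
    #include <reg51.h>
    sbit BUTTON = P1^3;                      /* real input pin, active low */
    unsigned char button_is_pressed(void) { return BUTTON == 0; }
    #endif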

    Most operations I do in embedded systems are quite simple, so there isn't much debugging needed. The majority of bugs can be found quite quickly during module tests, when there are no other threads or interrupts involved.

    More advanced algorithms are almost always possible to build and debug on a PC.

    The only type of bug that I am really 100% absolutely scared to death of is the timing glitch, since they are so extremely hard to find. They are seldom caused by simple code bugs, since I'm usually quite careful about volatile variables and synchronization between tasks.
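
    By "taking care of volatile variables" I mean the usual flag pattern between an interrupt and the main loop; a minimal Keil C51 sketch (the interrupt choice and the names are arbitrary):

    #include <reg51.h>

    /* Shared with the ISR: must be volatile, or an optimizing compiler
       may cache the flag in a register and the main loop will never
       see it change. */
    static volatile unsigned char rx_ready;
    static volatile unsigned char rx_byte;

    void serial_isr(void) interrupt 4
    {
        if (RI) {
            RI = 0;
            rx_byte = SBUF;
            rx_ready = 1;           /* set here, cleared only in main */
        }
    }

    void main(void)
    {
        for (;;) {
            if (rx_ready) {
                unsigned char b = rx_byte;
                rx_ready = 0;
                P1 = b;             /* example: echo the byte to port 1 */
            }
        }
    }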

    They are often caused by bad documentation (or bad reading skills on my side) in an RTL API or a 1000+ page datasheet. Sometimes they are caused by errors in my hardware (since I develop on early prototypes that may not be correctly designed) or by silicon errors (since new products, for price or power reasons, often make use of very new chips) where the error hasn't been found/documented yet, or where the latest errata are only supplied after contacting the distributor/chip vendor.

    The bad part is of course that timing glitches are hard to trigger, and it is hard to decide how to attack the problem, since the bugs are almost always caused by an incorrect assumption on my side or an undocumented "feature" in the HW or RTL.

    Because of the problems with embedded compilers, I tend to avoid the highest levels of optimization, unless I really need to squeeze the firmware in.

    If possible, I also try to make the code regression-testable, both to let me quickly detect new bugs that I have introduced, and to let me constantly run the application in "accelerated time", i.e. running hour after hour with faked events arriving at a higher frequency than the real product would normally have to suffer.
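
    The "accelerated time" part can be as simple as scaling the tick source in the test build (the symbol names and the 60x factor are just an example):

    /* With a scaled timebase, an event that normally fires once a minute
       fires once a second, so an overnight soak test covers weeks of
       simulated operation. */
    #ifdef REGRESSION_TEST
    #define TIME_SCALE 60u              /* run the clock 60x faster */
    #else
    #define TIME_SCALE 1u
    #endif

    static volatile unsigned int tick_count;

    void timer0_isr(void) interrupt 1   /* hardware tick, e.g. every 1 ms */
    {
        tick_count += TIME_SCALE;       /* all timeouts derive from this */
    }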

    I feel that the above tests would be less meaningful if I performed them with a different optimization level than the final release. I am actually more prone to switch to an alternative algorithm than to require maximum size or speed optimizations from the compiler.

    When I do need to debug the code, I can normally live with the single-stepping jumping up and down, or multiple variables being optimized into a single processor register. Seeing statements executed in the same order as the source code is seldom relevant. Timing that affects external signals can normally be measured with an oscilloscope. Timing issues that make internal handling too slow (buffers over/underflowing, interrupts nesting or not being serviced before the next interrupt occurs) are not much affected by the execution order of instructions.
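
    For the oscilloscope measurements, a spare port pin makes a convenient probe point; a sketch (P1.7 as the spare pin is an assumption):

    #include <reg51.h>

    sbit DEBUG_PIN = P1^7;      /* spare pin routed to a scope probe */

    void process_sample(void)
    {
        DEBUG_PIN = 1;          /* rising edge marks entry */
        /* ... code whose execution time is being measured ... */
        DEBUG_PIN = 0;          /* falling edge marks exit; the pulse
                                   width on the scope is the duration */
    }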

    When HW timing really is extremely critical, the only safe method is normally to switch to assembly, to make sure that a changed compiler switch or compiler version doesn't break any assumptions.

    One other note: when a chip is available with different amounts of memory, I always try to get a prototype with extra memory, to allow the test firmware to contain extra code.

    Sorry for the long post.

  • Since no one actually answered your request, here goes.

    This looks like a byte-wide interface to a peripheral with Chip Select (CSX), Read (RDX), Data/Command (D_CX), and Write (WRX) on individual I/O pins, and DBUS as the eight-bit data bus.

    The first line, "CSX=0", drives the chip select low and selects the device. Note: the chip select remains low until the last instruction of the function.

    Subsequent lines set Command mode, put the data on the bus, and pulse the Write line (a normally high signal with a low going pulse to write data).

    WRX=0 repeated three times does not actually change the state of the port pin (it is already low after the first write); the extra writes just kill time, stretching the low pulse, perhaps to give the peripheral enough time to capture the incoming data.
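
    Putting that together, here is the same function with each step annotated (the comments are my reading of the code; the trailing X in the signal names conventionally marks active-low lines):

    void Send_CmdNo(unsigned char command)
    {
        CSX  = 0;        /* assert chip select (active low): device listens */
        RDX  = 1;        /* keep the read strobe idle: this is a write cycle */
        D_CX = 0;        /* D/C low: the byte on the bus is a command */
        DBUS = command;  /* drive the command byte onto the 8-bit data bus */
        WRX  = 0;        /* write strobe low... */
        WRX  = 0;        /* ...rewriting the same value leaves the pin low, */
        WRX  = 0;        /* so the repeats only stretch the low pulse */
        WRX  = 1;        /* rising edge: the peripheral latches DBUS */
        CSX  = 1;        /* release chip select */
    }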

  • That does not mean that the pitfalls and pratfalls of either should not be brought to light.

    I probably won't find myself repeating it in that manner, but I do agree with the sentiments.

  • "no one actually answered your request"

    Yes they did - and also his subsequent request which followed-on from that answer!

  • A clearer solution might be to use the _nop_ macro:

    void Send_CmdNo(unsigned char command)
    {
       CSX = 0;
       RDX = 1;
       D_CX = 0;
       DBUS = command;
       WRX = 0;
       _nop_;   // Short Delay
       _nop_;
       WRX = 1;
       CSX = 1;
    }
    


    Of course, the timing would need to be verified.

  • Note that _nop_ is an Intrinsic Function - so the code should be:

    #include <intrins.h>   /* _nop_() is declared here */

    void Send_CmdNo(unsigned char command)
    {
       CSX = 0;
       RDX = 1;
       D_CX = 0;
       DBUS = command;
       WRX = 0;
       _nop_();   // Short Delay
       _nop_();
       WRX = 1;
       CSX = 1;
    }
    

    Note the parentheses.

    http://www.keil.com/support/man/docs/c51/c51__nop_.htm

  • Thanks guys... so many replies, although most are a bit off topic, but I still appreciate it ^^
    I will look into it :)