Does anybody know the purpose of doing this?
void Send_CmdNo(unsigned char command)
{
    CSX = 0;
    RDX = 1;
    D_CX = 0;
    DBUS = command;
    WRX = 0;
    WRX = 0;
    WRX = 0;
    WRX = 1;
    CSX = 1;
}
Why use WRX=0 three times before WRX=1? Wouldn't that end up the same as a single WRX=0 followed by WRX=1?
By the way, WRX is just:
sbit WRX = P7^2;
Does it mean that on the I/O pin the output will send out 0 three times and then 1 once? Does it really execute that fast?
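In case it helps, here is how I read the sequence, with my guesses as comments (I may well be wrong, hence the question). I'm assuming this drives an 8080-style parallel display interface where the controller latches DBUS on the rising edge of WRX, and that each WRX=0 is a separate write to the pin:

void Send_CmdNo(unsigned char command)
{
    CSX = 0;          /* select the controller (chip select low, assumed)   */
    RDX = 1;          /* keep the read strobe idle                          */
    D_CX = 0;         /* 0 = command, 1 = data (my assumption)              */
    DBUS = command;   /* put the command on the data bus                    */
    WRX = 0;          /* write strobe low                                   */
    WRX = 0;          /* pin is already 0, so this just burns one more write */
    WRX = 0;          /* ditto - the strobe stays low a little longer       */
    WRX = 1;          /* rising edge - command gets latched (assumed)       */
    CSX = 1;          /* deselect                                           */
}

If that reading is right, the extra WRX=0 lines are just a crude delay so the write pulse is not too short; with a Keil C51 toolchain (which the sbit declaration suggests) the same effect could also be had with _nop_() from <intrins.h>.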
Personally, I like to keep the compiler settings as similar as possible between development and release builds, mainly because relatively few developers use embedded compilers, which means a larger probability of getting hit by compiler bugs than with the mainstream PC compilers.
Usually, the only difference between a development and a release build is that the development build contains a number of extra integrity tests, where the real-time requirements and available space allow it.
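As a rough illustration of what I mean by an extra integrity test (all the names here, like DEV_BUILD and dev_trap, are made up for the example, not from any real project):

#define QUEUE_SIZE 16u

struct queue {
    unsigned char buf[QUEUE_SIZE];
    unsigned char head;
    unsigned char count;
};

#ifdef DEV_BUILD
/* Development build: check the invariant and stop where the debugger can see it. */
static void dev_trap(unsigned int line) { (void)line; for (;;) { } }
#define INTEGRITY_CHECK(cond) do { if (!(cond)) dev_trap(__LINE__); } while (0)
#else
/* Release build: the check compiles away to nothing. */
#define INTEGRITY_CHECK(cond) do { } while (0)
#endif

static void queue_put(struct queue *q, unsigned char item)
{
    INTEGRITY_CHECK(q->count < QUEUE_SIZE);   /* extra test, dev build only */
    q->buf[q->head] = item;
    q->head = (unsigned char)((q->head + 1u) % QUEUE_SIZE);
    q->count++;
}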
For some hardware peripherals it may also contain code stubs for regression testing, i.e. allowing the application itself to generate events without the need for external hardware to simulate a "user".
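For example (again with made-up names), a button driver in the development build might look roughly like this, so a test sequence inside the firmware can "press" the button without anyone touching the hardware:

#include <reg51.h>     /* Keil C51 register definitions (assumed toolchain) */

#ifdef REGRESSION_STUBS
static volatile unsigned char fake_button;           /* set by the test code */
void stub_set_button(unsigned char pressed) { fake_button = pressed; }
#endif

unsigned char button_pressed(void)
{
#ifdef REGRESSION_STUBS
    return fake_button;                               /* simulated "user"    */
#else
    return (unsigned char)((P1 & 0x01u) ? 0u : 1u);   /* real pin, active-low wiring assumed */
#endif
}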
Most operations I do in embedded systems are quite simple, so there isn't much debugging needed. The majority of bugs can be found quite quickly during module tests, when there are no other threads or interrupts involved.
More advanced algorithms are almost always possible to build and debug on a PC.
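As an example, I keep the algorithm in a plain C file with no hardware access at all, so the very same file compiles on the PC with a small test driver (the filter here is just a stand-in):

/* filter.c - pure algorithm, no port or register access, so it builds anywhere. */
long filter_step(long input, long *state)
{
    /* simple first-order low-pass, used only as an example */
    *state += (input - *state) / 8;
    return *state;
}

#ifdef HOST_TEST                      /* defined only in the PC test build */
#include <stdio.h>
int main(void)
{
    long state = 0;
    int i;
    for (i = 0; i < 20; i++)
        printf("step %d: %ld\n", i, filter_step(1000, &state));
    return 0;
}
#endif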
The only type of bug that I am really 100% absolutely scared to death of is timing glitches, since they are so extremely hard to find. They are seldom caused by simple code bugs, since I'm usually quite good at taking care of volatile variables and synchronization between tasks.
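To show what I mean by taking care of volatile variables: the classic pattern is a flag set in an interrupt and consumed in the main loop, roughly like this (Keil C51 interrupt syntax assumed, since the code above uses sbit; vector 4 would be the serial interrupt on a classic 8051):

static volatile unsigned char rx_ready;   /* volatile because the ISR writes it */

void uart_isr(void) interrupt 4           /* Keil C51 syntax, serial interrupt  */
{
    rx_ready = 1;
}

void main_loop(void)
{
    for (;;) {
        if (rx_ready) {
            rx_ready = 0;                 /* clear first, so an event arriving
                                             during handling is not lost        */
            /* handle the received byte here */
        }
    }
}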
These glitches are often caused by bad documentation (or bad reading skills on my side) of an RTL API or of 1000+ page datasheets. Sometimes they are caused by errors in my hardware (since I develop on early prototypes that may not be correctly designed), or by silicon errors where the problem hasn't been found or documented yet (since new products, for price or power reasons, often make use of very new chips) and the latest errata may only be supplied after contacting the distributor or chip vendor.
The bad part is of course that the timing glitches are hard to trigger, and it is hard to decide how to attack the problem, since the bugs are almost always caused by an incorrect assumption on my part or an undocumented "feature" in the hardware or RTL.
Because of the problems with embedded compilers, I tend to avoid the highest levels of optimization, unless I really need to squeeze in the firmware.
If possible, I also try to make the code regression-testable, both to allow me to quickly detect new bugs I have introduced and to allow me to constantly run the application in "accelerated time", i.e. running hour after hour with faked events arriving at a higher frequency than the real product would normally have to suffer.
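As a sketch of the accelerated-time idea (the period values, REGRESSION_BUILD and post_event() are all invented for the example): the same timer code simply fires the faked events far more often than the real product ever would.

#define EVENT_SENSOR_SAMPLE 1u
extern void post_event(unsigned char ev);      /* hypothetical event-queue call      */

#ifdef REGRESSION_BUILD
#define EVENT_PERIOD_TICKS  10u                /* accelerated: every 10 ms           */
#else
#define EVENT_PERIOD_TICKS  60000u             /* real product: once a minute        */
#endif

static unsigned int tick_count;

void timer_tick(void)                          /* called from a 1 ms timer interrupt */
{
    if (++tick_count >= EVENT_PERIOD_TICKS) {
        tick_count = 0;
        post_event(EVENT_SENSOR_SAMPLE);
    }
}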
I feel that the above tests would be less meaningful if I performed them with a different optimization level than the final release. I am actually more prone to switch to an alternative algorithm than to require maximum size or speed optimization from the compiler.
When I do need to debug the code, I can normally live with the single-stepping jumping up and down, or with multiple variables being optimized into a single processor register. Seeing statements executed in the same order as the source code is seldom relevant. Timing that affects external signals can normally be measured with an oscilloscope. Timing issues that make internal handling too slow (buffers over- or underflowing, interrupts nesting or not being serviced before the next interrupt occurs) are not much affected by the execution order of individual instructions.
When hardware timing really is extremely critical, the only safe method is normally to switch to assembly, to make sure that a changed compiler switch or compiler version doesn't break any assumptions.
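For example, a bit-banged strobe whose pulse width must never depend on compiler settings would go in a small hand-written assembly routine; short of that, the closest C sketch is to pin the width with explicit NOPs (the _nop_() intrinsic and the pin below are assumptions, taking a Keil C51 toolchain since the earlier code uses sbit):

#include <intrins.h>        /* Keil C51 intrinsics, provides _nop_()     */
#include <reg51.h>

sbit STROBE = P1^3;         /* hypothetical output pin for the example   */

/* Each _nop_() is one machine cycle on a classic 8051, so the pulse
   width is fixed by the source, not by the optimizer or compiler version. */
void strobe_pulse(void)
{
    STROBE = 1;
    _nop_();
    _nop_();
    _nop_();
    STROBE = 0;
}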
One other note: when a chip is available with different amounts of memory, I always try to get a prototype with extra memory, to allow the test firmware to contain extra code.
Sorry for the long post.