Hello,
See here:
www.open-std.org/.../C99RationaleV5.10.pdf
Is it possible that "Jack Sprat", the staunch defender of the C standard as the ultimate reference when writing programs, missed the following statement?
C code can be non-portable. Although it strove to give programmers the opportunity to write truly portable programs, the C89 Committee did not want to force programmers into writing portably, to preclude the use of C as a “high-level assembler”: the ability to write machine-specific code is one of the strengths of C. It is this principle which largely motivates drawing the distinction between strictly conforming program and conforming program (§4).
This is precisely what Per Westermark has been saying. Exactly what Erik Malund has been saying. Remember: Jack Sprat often claims that writing a program that complies with the C standard is a GUARANTEE of its correct functioning.
the overseas programmers may be cheaper but they have not mastered the right way of doing large / complex projects. language, as in communications / documentations, is another big hurdle that in my view renders the overseas shops non-competitive in this segment.
While the local competitors are OOXX Level n qualified, the company I am currently working for is OOXX Level n+1 qualified (supposedly better). (I come from Taiwan.)
People here are talking about modularizing; the modularizing is for code reusability and portability. The reusability and portability are aimed at 8-bit/16-bit MCUs like PIC microcontrollers, though 32-bit MCUs may also be included.
To achieve the modularizing, someone proposed an idea/rule for C programming where fileA.c is not allowed to include fileA.h; he claims this is for reducing the cohesion and data coupling. I have no choice but to try to stop him.
I also encountered a lot of other amazing stuff here.
The first official case of modularizing is to produce a reusable and portable module for GPIO and key-input debouncing.
For traditional PIC MCUs, they still believe that a multi-layered software design will work well. To me, a multi-layered software design leads to deeper function-call nesting and more RAM consumption.
Multi-layered, modularized code can work very well. But it isn't always the best concept for a lamp timer or other projects with extremely little logic. It is a concept that assumes that the program has a bit of business logic - if it doesn't, then the size is probably so small that even badly modularized programs will be easy to maintain.
Having GPIO in a module sounds like an incorrect slicing of the cake, but I might have misunderstood how it was planned. I normally create inline functions for getter/setter functions instead of trying to have a generic GPIO-read or GPIO-write.
So a project may look like:
activate_siren();
activate_relay1();
if (tamper_switch_active()) { ... }
The above makes it easy to change which pins drive different things (and how the pins are driven), or how input stimuli are retrieved. Often, the same source code is compiled for multiple hardware platforms, but the header file with all the inline functions is different.
Since the inline functions may look like:
__inline void activate_siren(void)   { FIO1SET = 1u << P1_SIREN; }
__inline void deactivate_siren(void) { FIO1CLR = 1u << P1_SIREN; }
the efficiency is excellent.
If you try to make the full GPIO into a generic module, that module must take parameters for the requested action and decide what to actually do. It takes both extra code and extra clock cycles, without any gain. It also makes it more likely that the business logic gets intermixed with the decisions about which actual pins are used for different things.
Code like set_gpio_pin(GPIO_RELAY, GPIO_ON) requires the set_gpio_pin() function to figure out which port and pin to modify, and whether the pin should be high or low (not trivial, since some pins may require inverted logic depending on the external electronics).
And code like set_gpio0_pin(GPIO0_RELAY, GPIO_ON) will save the generic function from knowing which port is involved - but will instead require that the business logic be modified if the function is moved to a pin on a different port.
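Not from the thread - a minimal sketch of what such a generic set_gpio_pin() ends up having to do, with the port registers simulated as plain variables and all names (pin table, enums) hypothetical. The point is the run-time table lookup and inversion test that the inline-function approach avoids entirely:

```c
#include <stdint.h>
#include <assert.h>

/* Simulated port registers standing in for real hardware (illustration only). */
static uint32_t port0 = 0, port1 = 0;

typedef struct {
    uint32_t *port;     /* which port register the pin lives on     */
    uint8_t   bit;      /* bit position within that port            */
    uint8_t   inverted; /* 1 if external electronics invert the pin */
} pin_desc_t;

enum { GPIO_RELAY, GPIO_SIREN, GPIO_PIN_COUNT };
enum { GPIO_OFF, GPIO_ON };

/* The table the generic function must consult on every call. */
static const pin_desc_t pins[GPIO_PIN_COUNT] = {
    [GPIO_RELAY] = { &port0, 3, 0 },
    [GPIO_SIREN] = { &port1, 7, 1 },   /* active-low wiring */
};

void set_gpio_pin(int id, int state)
{
    const pin_desc_t *p = &pins[id];
    int drive_high = p->inverted ? !state : state;   /* run-time decision */
    if (drive_high)
        *p->port |= 1u << p->bit;
    else
        *p->port &= ~(1u << p->bit);
}
```

Every call pays for the table lookup and the inversion branch - work that the per-project inline header resolves at compile time.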
Having C files that don't include a corresponding header file with the exported symbols sounds like a big mental accident. C++ can catch some problems thanks to its type-safe linkage. But for C, the problems will quickly become catastrophic unless a code analyzer with global processing capabilities is used.
I see the header file as a form of "contract". It contains a list of services that the C module promises to deliver. Obviously, the module itself should also be allowed to know what services it promises to deliver.
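Not from the thread - a single-file sketch of the "contract" idea, with both hypothetical "files" shown inline. Because fileA.c includes its own header, any mismatch between the promise and the definition becomes a compile error instead of a silent link-time bug:

```c
#include <assert.h>

/* --- fileA.h: the list of services fileA.c promises to deliver --- */
int fileA_checksum(const unsigned char *buf, int len);

/* --- fileA.c: includes its own header, so the compiler checks the  */
/* definition against the promise. Change the return type or the     */
/* parameters below and the build breaks immediately - plain C with  */
/* no self-include would link silently with the wrong signature.     */
int fileA_checksum(const unsigned char *buf, int len)
{
    int sum = 0;
    while (len-- > 0)
        sum += *buf++;
    return sum;
}
```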
I'm not so sure about the suitability of having a generic keyboard debounce to plug into all projects. A big question is where the debounce code would get its timing information. Another is that some projects may have single buttons (ENTER, BACK, LEFT, RIGHT) connected to individual processor pins, while other projects may have a matrix keyboard where the user may hold more than one button pressed. Having fully generic code for a 4x4 keyboard would also be interesting, since it would then basically have to read and set pins one-by-one using the GPIO layer - and the GPIO layer would have to be extremely advanced to support simultaneous sampling or control of multiple pins.
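Not from the thread - one common way to answer the "where does the time come from" question is to push it onto the caller: a counter-based debouncer that is fed one raw sample per periodic tick (say every 10 ms), so it never needs a timer of its own. All names and the threshold here are hypothetical:

```c
#include <stdint.h>
#include <assert.h>

#define DEBOUNCE_TICKS 5   /* ~50 ms at an assumed 10 ms tick */

typedef struct {
    uint8_t counter;
    uint8_t stable;   /* last debounced state, 0 or 1 */
} debounce_t;

/* Feed one raw sample per tick; returns the debounced state.
 * The raw input must disagree with the stable state for
 * DEBOUNCE_TICKS consecutive ticks before it is accepted. */
uint8_t debounce_update(debounce_t *d, uint8_t raw)
{
    if (raw == d->stable) {
        d->counter = 0;              /* input agrees - nothing to do  */
    } else if (++d->counter >= DEBOUNCE_TICKS) {
        d->stable = raw;             /* held long enough: accept it   */
        d->counter = 0;
    }
    return d->stable;
}
```

This handles a single button on a single pin; it says nothing about matrix scanning, which is exactly where the "fully generic" ambition gets into trouble.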
In many situations, you perform modularization by calling a standard function name to get something done, but then have multiple implementations depending on the project. This is the normal way to implement serial communication - each target processor has one source file for each supported serial port, and the program just uses com0_sendchar() or com0_printf().
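Not from the thread - a sketch of that "one name, many implementations" pattern. The portable code calls com0_sendchar(); each target supplies its own source file with the real implementation. Here a host-side stand-in records the output in a buffer so the idea can be demonstrated off-target (all names are hypothetical):

```c
#include <string.h>
#include <assert.h>

/* Host-side stand-in implementation; the target version of this
 * function would write to the UART data register instead. */
static char com0_log[64];
static int  com0_len;

void com0_sendchar(char c)
{
    if (com0_len < (int)sizeof com0_log - 1)
        com0_log[com0_len++] = c;
    com0_log[com0_len] = '\0';
}

/* Portable code: identical on every target, since it only knows
 * the standard function name, not the hardware behind it. */
void com0_puts(const char *s)
{
    while (*s)
        com0_sendchar(*s++);
}
```

The link step, not conditional compilation, selects the implementation - which is what keeps the business logic free of hardware detail.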
But trying to code something hardware-specific into a generic function either leads to lots of conditional compilation or into lots of really meaningless glue functions being created and called. How fun would it be with a generic UART driver that contains code for:
f = get_base_frequency();
idiv = get_uart_ticks_per_bit();
divisor = f / idiv / baud;
error = f - divisor * idiv * baud;   // might need fractional baudrate compensation
set_baud_divisor(divisor, error);
Too generic will quickly explode into unmaintainable, large and inefficient solutions.
"... reducing the cohesion and data coupling."
Or:
"... increasing the cohesion and data coupling."
Either way, "someone" needs to get a clue about "proposing an idea/rule".
The C unfriendly architecture of the traditional PIC MCU makes things worse.
Although I am not a competent developer, and I am not good at explaining and illustrating, it is quite easy to see that people here will fail.
I am not able to do much to help people here, because they are numerous and in higher positions.
Maybe Bill Gates or Steve Jobs can convince them, but Dennis Ritchie and Ken Thompson can NOT.
"multi-layered structure of software design leads to more depth of function calls and more RAM consumption"
Yes: like use of the programming language, use of a "multi-layered structure" can be done well, or done badly - it is not, of itself, a magic bullet.
Yes: adding an "abstraction" layer to help make code portable does usually add some overhead. As always, there is a tradeoff of the gain in terms of developer/maintainer performance against any overheads on the target.
The criteria for "optimisation" need to include not only the "costs" of code size, RAM size, and execution speed - but also the developer/maintainer costs...
It is not uncommon that different parts of the code will need different balances in this tradeoff...
Writing modular code is something that is done to help reduce complexity and/or improve code reuse.
That obviously requires a very light hand. Priority 1 is to analyze the situation. Then a suitable solution can be suggested.
That means that a company can't just produce a document telling how people should modularize a program. The process must include the project team based on the needs of the project and the available resources.
Having iron-hard rules doesn't help the project team - except that they are allowed to turn off their brains and hack lousy "by-the-book" code without having to care about how well it works. The goal should be to write economical and well-working code, not a 25-layer precursor to Eddie, the shipboard computer from The Hitchhikers Guide to the Galaxy. It just may end up more like HAL from 2001.
Is that the latest Arduino spin-off...?!
;-)
"multi-layered structure of software design leads to more depth of function calls and more RAM consumption."
absolutely true.
that's why we live, unfortunately, in a world where people are paid big $$$$$ to make the right compromise.
engineering a non-compromised design is simple - because you will never get it done.
it is engineering a compromised design that is incredibly hard.
"Too generic will quickly explode into unmaintainable, large and inefficient solutions."
because being unmaintainable, large and inefficient is how you define something to be too generic. so the above sentence is meaningless.
there is always a degree of "optimization" or compromise here. the fact that an approach, when pushed to an extreme, can be unmaintainable, large and inefficient doesn't mean that that approach is unmaintainable, large and inefficient.
it only means that you have made a poor compromise. so in that case, blame the person(s) pushing that approach to be unmaintainable, large and inefficient, not that approach.
"I am not able to do much to help people here. Because they are numerous, and in higher position."
given enough time, they will move up the value chain and learn how to make the right compromises.
hopefully, in the mean time, we have moved onto something more difficult and more value-added.
"Writing modular code is something that is done to help reduce the complexity and/or improving code reuse."
in my case, it is primarily to reduce labor cost and reduce time to market, by providing high quality code that has been debugged before.
it is, from that point of view, a preservation / institutionalization of our prior investment in debugging.
"Too generic will quickly explode into unmaintainable, large and inefficient solutions." because being unmaintainable, large and inefficient is how you define something to be too generic. so the above sentence is meaningless.
You just reversed the direction of an implication arrow, or upgraded a one-way relation into a two-way equivalence.
A too general solution gets unmaintainable, large and inefficient.
But an unmaintainable code block need not be too general, or even general at all. A large code block need not be too general, or even general at all. An inefficient block need not be too general, or even general at all.
So, in short, your summary that the above sentence is meaningless is based on erroneous logic.
=> does not imply <= or <=>
because being unmaintainable, large and inefficient is how you define something to be too generic
Well well, this is a rather artistic license to define "too generic". Why "unmaintainable" ? Why "inefficient" ? And why "large"?
None of the above is _necessarily_ true.
"A too general solution gets unmaintainable, large and inefficient."
I am not sure what a too "general" solution is: I thought we were talking about code being too "generic" (not "general").
if you object to my approach, then what do you mean by "too generic"?
To achieve the modularizing, someone proposed an idea/rule for C programming, where fileA.c are not allowed to include fileA.h,
That means you desperately need a source of better "someones".
he claims this is for reducing the cohesion and data coupling
That claim is pure and utter nonsense. The module's own interface header is exactly the single header file whose inclusion will not have any effect on cohesion or cross-coupling. Every other include is what does that.