Hello,
See here:
www.open-std.org/.../C99RationaleV5.10.pdf
Is it possible that "Jack Sprat", the staunch defender of the C standard as the ultimate reference when writing programs, missed the following statement?
C code can be non-portable. Although it strove to give programmers the opportunity to write truly portable programs, the C89 Committee did not want to force programmers into writing portably, to preclude the use of C as a “high-level assembler”: the ability to write machine-specific code is one of the strengths of C. It is this principle which largely motivates drawing the distinction between strictly conforming program and conforming program (§4).
this is precisely what Per Westermark has been saying. Exactly what Erik Malund has been saying. Remember: Jack Sprat often claims that writing a program that complies with the C standard is a GUARANTEE of its correct functioning.
"The operation of the above program will be in accordnace with the 'C' standard - but that will not be what the programmer intended."
this is why we need a standard: it assures the operations / behaviors will be the same regardless of which hardware the code runs on.
but it doesn't assure the correct results on all hardware. It is up to the programmer to make sure that's the case.
Something so primitive that apparently the OP doesn't understand.
The 'C' standard specifies some things as implementation-defined and some things as undefined.
Therefore, the operations / behaviours will be the same only in so far as they do not rely upon undefined or differing implementation-defined aspects...
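To make the distinction concrete, here is a minimal C sketch (the particular operations chosen are just common illustrations, nothing specific to this thread):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int i = -1;

    /* Implementation-defined: the result of right-shifting a negative
       signed value is chosen by the implementation (and must be documented). */
    printf("-1 >> 1     = %d\n", i >> 1);

    /* Implementation-defined: the width of 'int' varies - 16 bits on many
       8/16-bit targets, 32 bits on most 32-bit targets. */
    printf("sizeof(int) = %u\n", (unsigned)sizeof(int));

    /* Undefined: signed overflow. The standard places NO requirement on what
       happens if the next line is uncommented and executed. */
    /* i = INT_MAX + 1; */

    return 0;
}

An implementation must document its choice in the implementation-defined cases; in the undefined case it may do anything at all, which is exactly why two conforming compilers can produce programs that behave differently.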
You do realize that you just sucked this out of your thumb, don't you?
Thanks Andy.
Ashley: Do refer to my "thumb" statement above.
the beauty of C vs. assembly is its portability, in my view. and that's very important from a software point of view.
assembly (machine coding) can do everything C can do. However, it sets a much higher hurdle in human capital, it requires more time for a project, and it is harder to reuse the code from one assembly language to another, or to move the code from one chip to another.
What portability allows you to do is to take code developed and debugged on one project / chip and reuse it on a new project. As you develop more and more in C, you will have built up a large library of code that can be ported to the next project - code that, with a high degree of confidence, you expect to work once plugged in.
this becomes a huge competitive advantage for a software vendor: it allows you to develop a product sooner, cheaper, and with higher quality.
if you develop your code with portability in mind.
It depends what you mean by "portability". Code developed for a 32-bit machine, littered with preprocessor statements to make it run "correctly" on an 8-bit machine, is not portable in my opinion. I think true portability is much more an issue at a larger scale: the software constructs conceived to achieve a goal, rather than the detailed code itself. Or in other words: the interface/algorithm, not the implementation details themselves (even those might not be portable, considering performance issues). I was lucky enough to have to port only from an LPC2478 to LPC17xx/18xx machines - but that is a no-brainer: even the peripherals are similar.
That is a very, very, very big "if"!!
As Tamir's original quotation said (my emphasis added),
"C code can be non-portable. Although it strove to give programmers the opportunity to write truly portable programs, the C89 Committee did not want to force programmers into writing portably"
It is a common misapprehension that the mere fact of using 'C' - in and of itself - will inherently make your programs portable. It will not.
Just as the mere fact of using Assembly - in and of itself - will not just magically make your code fast & compact.
"this is why we need a standard: it assures the operations / behaviors will be the same regardless of which hardware the code runs on."
But this isn't part of the scope of the C standard, or of the scope of C.
The actual behaviour of a program must take into account lots of extras, such as (a very incomplete list):
- The size of the heap - will the program manage to allocate the required amount of memory on the target hardware when in the expected target machine state? How will the specific implementation of the heap behave with regard to fragmentation?
- Allocation sizes - what is the largest contiguous block that can be allocated from the heap?
- Array indexing - what is the largest memory block that can be indexed as an array? What is the largest index allowed?
- Execution speed - will the program manage to react fast enough to a signal? Will it be able to push enough characters per second through "standard out"? Will it manage to perform a computation and emit the answer before a new input arrives?
- Numeric range of data types. Not all processors have the same word size. Not all processors even make the same small/int/large decisions, even with the same native accumulator and/or register sizes. A machine doesn't need to have two's-complement integers in 2^n-sized data types; there have been 6-bit and 9-bit characters and 36-bit ints. How do you write an embedded program if the compiler defines the char type as 16 bits, but the processor has 8-bit SFRs side by side with no padding?
- Capability of the stack. A perfectly written recursive expression evaluator may not work on all platforms just because some expressions result in too deep a recursion.
- Self-modifiability. Function pointers are pointers. Some targets can do memcpy() with a function pointer to a new address and allow that new address to be used for a function call, so C code can duplicate a function into RAM before a code flash is erased. Some architectures can't run code from any R/W memory area. Some compilers/linkers have helper tools to link a function into code space for duplication into RAM space.
- Ability to handle or produce unaligned or packed data.
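Some of the data-type assumptions in that list can at least be locked down at compile time. A sketch, using the old negative-array-size trick so it works even without C11's _Static_assert (the BUILD_ASSERT macro name is made up):

#include <limits.h>

#define BUILD_ASSERT(cond, name) typedef char name[(cond) ? 1 : -1]

/* Document - and enforce - a few of the assumptions listed above: */
BUILD_ASSERT(CHAR_BIT == 8,    char_is_8_bits);
BUILD_ASSERT(sizeof(int) >= 4, int_is_at_least_32_bits);
BUILD_ASSERT((-1 & 3) == 3,    twos_complement_assumed);

/* If the code is later moved to a target where any of these fail (a 16-bit
   int, a 16-bit char on a DSP, ...), the build breaks immediately instead
   of the program silently misbehaving at run time. */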
The language standard does not assure the same behaviour for all C programs, or even for all valid C programs. It just tells the developer that within a given envelope the program will behave with strict compatibility. Outside that envelope, the developer will either be totally on his own, or will have to rely on the compiler vendor's notes about target-specific limitations/design choices.
In embedded development, the target hardware often has a limited size. So the developer will have to write lots of comments and/or documents about the assumptions made and about the required tests if the code is moved to other hardware: evaluations that must be tested with worst-case parameters to verify there is no overflow/underflow/division by zero, and time-critical code that must be verified with a profiler or an oscilloscope.
Design-by-contract can be a nice development strategy. It's too bad that embedded targets often don't have the code and/or RAM space available for running debug builds with contract-validation code included.
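For targets that can't afford a full debug build, even a single macro gives some of the benefit. A minimal sketch, assuming a project-specific ENABLE_CONTRACTS switch and a hypothetical contract_failed() hook (both names are made up):

#ifdef ENABLE_CONTRACTS
extern void contract_failed(const char *file, int line);  /* hypothetical hook */
#define REQUIRE(cond) \
    do { if (!(cond)) contract_failed(__FILE__, __LINE__); } while (0)
#else
#define REQUIRE(cond) ((void)0)  /* compiled out: no code space, no RAM */
#endif

long scale(long value, long divisor)
{
    REQUIRE(divisor != 0);  /* the precondition part of the "contract" */
    return value / divisor;
}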
"That is a very, very, very big "if"!!"
absolutely. for this strategy to yield fruit, the code will need to be developed with portability in mind.
just because you are using C doesn't mean you will write portable code.
no argument there whatsoever.
"In embedded development, the target hardware often have a limited size. So the developer will have to write lots of comments and/or documents about made assumptions and about required tests if the code is moved to other hardware."
agreed.
that's why writing portable code is hard, and difficult for many to understand.
"In embedded development, the target hardware often have a limited size. So the developer will have to write lots of comments and/or documents about made assumptions and about required tests if the code is to be moved to other hardware."
I inserted two words in the quote to make the point. what is stated is a thing that must be done at the conception of the code for the original processor.
anyone ever been told at conception that the code would be moved?
Erik
'what is stated is a thing that must be done at the conception of the code for the original processor.'
it doesn't have to be done at the conception; but it is best done at the conception.
there are different kinds of portability:
1) cross-platform portability: many tasks are not hardware-specific, like doing an fft (without using dedicated hardware). a pid library would be another good example here.
2) cross-family portability: some tasks are hardware-specific to a family of chips. their portability is likely limited to the interface - you always call i2c_send() to send a byte over i2c, but different platforms may have their own ways of performing that task, etc. those things that are specific to that family / hardware obviously aren't portable and have to be recreated on new hardware. But portability insulates the higher layers from being (materially) rewritten - see the sketch after this list.
then there are pieces of code that are not going to be portable regardless of what you do. part of our job is to minimize that portion.
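A minimal sketch of that split, reusing the i2c_send() name from above (the file names and the empty register-poking body are only placeholders):

/* i2c.h - the portable interface; the higher layers see only this. */
#ifndef I2C_H
#define I2C_H
void i2c_init(void);
int  i2c_send(unsigned char addr, const unsigned char *buf, unsigned len);
#endif

/* i2c_lpc17xx.c - one family-specific implementation. */
#include "i2c.h"

int i2c_send(unsigned char addr, const unsigned char *buf, unsigned len)
{
    /* ...poke this family's I2C peripheral registers here... */
    (void)addr; (void)buf; (void)len;
    return 0;
}

Moving to a new family means writing a new i2c_xxx.c against the same header; the application code that calls i2c_send() is untouched.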
'anyone ever been told at conception that the code would be moved?'
many times over.
That documentation is always needed - it really doesn't matter whether you plan to change processor or not.
It is quite common that a project starts with a processor running at 8MHz and a couple of years later is moved to an "identical" processor in the same family that runs at 16MHz while consuming half the power. The change was just that the newer processor cost less.
Assumptions must always be documented as well as can be done, whatever the expectations about processor changes at a later time. It might instead be an external chip that needs to be replaced because the original chip can't be bought any more. Or maybe there is a need to step up the baudrate on an interface, because the transfer time represents extra cost at the factory, or because the device is intended to be used with another device that has started to support a higher baudrate. At low baudrates, it may be enough to round to the closest divisor for the baudrate generator, so an assumption may have been made that no fractional baudrate compensation is needed. This obviously must also be documented in case there is a need to introduce new baudrates, since a smaller divisor value means larger granularity between the achievable baudrates.
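A small sketch of that divisor point - the 12MHz clock and the 16x oversampling factor below are example values, not taken from any particular part:

#include <stdio.h>

#define PCLK_HZ 12000000UL  /* assumed peripheral clock */

static void check_baud(unsigned long baud)
{
    /* round to the nearest integer divisor, as the assumption above allows */
    unsigned long div    = (PCLK_HZ + 8UL * baud) / (16UL * baud);
    double        actual = (double)PCLK_HZ / (16.0 * div);
    double        err    = 100.0 * (actual - (double)baud) / (double)baud;

    printf("%7lu baud: div=%4lu actual=%9.1f error=%+.2f%%\n",
           baud, div, actual, err);
}

int main(void)
{
    check_baud(9600);    /* error well below 1% */
    check_baud(115200);  /* error grows as the divisor gets small */
    check_baud(921600);  /* may exceed what the other end tolerates */
    return 0;
}

The documented assumption ("rounding is good enough") only holds at the low end of that table; introducing a new, higher baudrate later means revisiting it.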
It's almost impossible to know what may happen to a product during the full life cycle. Only by having the developer think about the assumptions he makes when the code is written and originally validated, and having him document these assumptions/validations right away, can you be reasonably sure that you end up with well-documented source code.
I'm reusing code today that I wrote 15 years ago or more. Besides being debugged from a "C" standpoint, I also know that there is good documentation of the code's boundaries.
When looking at code on the net, comments are often missing. And if the code has comments, it's quite often just a more or less redundant description of what the code statements do. Sometimes with a description of each input parameter and what results a module produces. But almost never is code documented with assumptions and boundaries. What numeric range is safe (mathematically or explicitly tested) for input parameters? What resources does the code assume it may consume? What assumptions about data types? What assumptions about reentrancy?
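As an illustration of the difference, here is the kind of header comment meant above. The function and the numbers are invented; only the documentation style is the point:

/*  adc_to_millivolt()
 *  Converts a raw 12-bit ADC reading to millivolts.
 *
 *  Assumptions / boundaries:
 *    - 'raw' is in 0..4095; values above 4095 are NOT checked here, the
 *      caller guarantees the range.
 *    - VREF is assumed to be 3300 mV; re-validate the scaling if the
 *      reference changes.
 *    - Pure integer math, no overflow: 4095 * 3300 fits easily in a long.
 *    - Reentrant: no static state, no heap, only a few bytes of stack.
 */
long adc_to_millivolt(unsigned raw)
{
    return ((long)raw * 3300L) / 4095L;
}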
As our microcontrollers get larger and larger, we can start solving bigger and bigger problems. For many designs, that means a larger and larger percentage of the processor capacity is spent in business logic instead of doing I/O, especially since new processors have better and better hardware acceleration for peripherals. This allows a larger percentage of products to separate the code into a hardware layer and a business-logic layer without wasting a significant amount of time doing extra function calls.
In the end, we get more and more embedded devices that have 10k, 100k or 1M source lines, or maybe even more. The time invested in the source code gets larger and larger. And the code just has to be moved to newer platforms as technology improves, since the invested costs are so high because of the complexity of the problems being solved.
A tiny lamp timer can have 100% of the code rewritten if you need to move to a different processor for improved cost efficiency, or if the original processor is no longer available. Larger projects may live for 10 years or more, and may have to move to new hardware every 6-12 months to be able to constantly increase the production volumes while at the same time dropping the production costs.
This means that we embedded developers must constantly try to improve how we work, because the scope of our projects is getting larger and larger. At the same time, the cost of salaries is constantly increasing, making it harder to develop new products in a competitive way. Low-cost countries have cheaper labour, but often developers without the know-how about a specific market niche. But if the development is outsourced, they will have the know-how after a couple of years, while the company ordering the development will lose their know-how (and soon their market).
So - a long rant, but the end result is that we don't know about the future so we must document our assumptions and required validation tests (and do our best to see how source code can be layered), being prepared for changes if we want to stay in business.
we see very few 8-bit jobs now and we offer 8-bit capabilities primarily for marketing / one-stop-shop purposes. the 8-bit market isn't important to us because of low demand and the abundance of low-cost overseas capabilities (hardware + software), so we aren't competitive there.
most of our business comes from the 32-bit market. the overseas programmers may be cheaper but they have not mastered the right way of doing large / complex projects. language, as in communication / documentation, is another big hurdle that in my view renders the overseas shops non-competitive in this segment.
"being prepared for changes if we want to stay in business."
this is where structuring your code to be portable has a huge advantage. As the hardware changes, you are able to take out the relevant part and plug in the new hardware-specific portion of your code and you are ready to go with a project from day 1.
that would be a huge competitive advantage and will help you offset any cost advantage your overseas competitors may have.
"the overseas programmers may be cheaper but they have not mastered the right way of doing large / complex projects. language, as in communication / documentation, is another big hurdle that in my view renders the overseas shops non-competitive in this segment."
While the local competitors are OOXX Level n qualified, the company I am currently working for is OOXX Level n+1 qualified (supposed to be better). (I come from Taiwan.)
People here are talking about modularization; the modularization is for code reusability and portability. The code reusability and portability are aimed at 8-bit/16-bit MCUs like PIC microcontrollers and others, but 32-bit MCUs may also be included.
To achieve the modularization, someone proposed an idea/rule for C programming where fileA.c is not allowed to include fileA.h; he claims this is for reducing cohesion and data coupling. I have no choice but to try to stop him.
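For what it's worth, the usual argument for the opposite rule is that when fileA.c includes its own fileA.h, the compiler can check that declaration and definition still agree. A small made-up illustration:

/* fileA.h */
int get_temperature(int channel);

/* fileA.c */
#include "fileA.h"                 /* the include he wants to forbid */

long get_temperature(int channel)  /* return type has drifted to 'long'... */
{
    return channel * 10L;
}

/* With the include, the compiler reports "conflicting types for
   'get_temperature'" at compile time. Without it, every caller quietly
   keeps using the stale prototype from fileA.h. */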
I also encountered a lot of other amazing stuff here.