
What does the C standard say about portability?

Hello,

See here:

www.open-std.org/.../C99RationaleV5.10.pdf

Is it possible that "Jack Sprat", the staunch defender of the C standard as the ultimate reference when writing programs, missed the following statement?

C code can be non-portable.  Although it strove to give programmers the opportunity to write
truly portable programs, the C89 Committee did not want to force programmers into writing
portably, to preclude the use of C as a “high-level assembler”:  the ability to write
machine-specific code is one of the strengths of C.  It is this principle which largely motivates
drawing the distinction between strictly conforming program and conforming program (§4).

This is precisely what Per Westermark has been saying.
Exactly what Erik Malund has been saying.
Remember: Jack Sprat often claims that writing a program that complies with the C standard is a GUARANTEE of its correct functioning.

  • writing a program that complies with the C standard is a GUARANTEE for its correct functioning

    It is easy to disprove this statement by providing an example. Say, write a program that multiplies two numbers using the type int and have it fail on 16-bit machines due to overflow. You see, it's easy :-)

  • "writing a program that complies with the C standard is a GUARANTEE for its correct functioning"

    That depends on what you mean by "correct".

    It will be correct in accordance with the 'C' standard - the problem is usually that this is not the behaviour that the programmer wanted.

    The classic example is:

    if( x = 1 )
    {
       // Programmer wants this done when x has the value 1
    }
    else
    {
       // Programmer wants this done when x has values other than 1
    }
    

    The operation of the above program will be in accordance with the 'C' standard - but that will not be what the programmer intended.

    The fault is with the programmer.

  • "The operation of the above program will be in accordance with the 'C' standard - but that will not be what the programmer intended."

    this is why we need a standard: it assures the operations / behaviors will be the same regardless of which hardware the code runs on.

    but it doesn't assure the correct results on all hardware. It is up to the programmer to make sure that's the case.

    Something so basic that, apparently, the OP doesn't understand it.

  • The 'C' standard specifies some things as implementation-defined and some things as un-defined

    Therefore, the operations / behaviours will be the same only so far as they do not rely upon un-defined or differing implementation-defined aspects...

  • this is why we need a standard: it assures the operations / behaviors will be the same regardless of which hardware the code runs on.

    You do realize that you just sucked this out of your thumb, don't you?

  • Therefore, the operations / behaviours will be the same only so far as they do not rely upon un-defined or differing implementation-defined aspects...

    Thanks Andy.

    Ashley: Do refer to my "thumb" statement above.

  • the beauty of C vs. assembly is its portability, in my view. and that's very important from a software point of view.

    assembly (machine coding) can do everything C can do. However, it sets a much higher hurdle in terms of human capital, requires more time for a project, and makes it harder to reuse code from one assembler to another, or to move code from one chip to another.

    What portability allows you to do is to take code developed and debugged on one project / chip and reuse it on a new project. As you develop more and more in C, you build up a large library of code that can be ported to the next project - code that you can expect, with a higher degree of confidence, to work once plugged in.

    this becomes a huge competitive advantage for a software vendor: it allows you to develop a product sooner, cheaper, and with higher quality.

    if you develop your code with portability in mind.

  • It depends what you mean by "portability".
    Code developed for a 32-bit machine, littered with preprocessor statements to make it run "correctly" on an 8-bit machine, is not portable in my opinion.
    I think true portability is much more an issue at a larger scale: the software constructs conceived to achieve a goal, rather than the detailed code itself. In other words: the interface/algorithm, not the implementation details (even those might not be portable, considering performance issues). I was lucky enough to have to port only from an LPC2478 to LPC17xx/18xx machines - but that is a no-brainer: even the peripherals are similar.

  • That is a very, very, very big "if"!!

    As Tamir's original quotation said (my emphasis added),

    "C code can be non-portable. Although it strove to give programmers the opportunity to write truly portable programs, the C89 Committee did not want to force programmers into writing portably"

    It is a common misapprehension that the mere fact of using 'C' - in and of itself - will inherently make your programs portable. It will not.

    Just as the mere fact of using Assembly - in and of itself - will not just magically make your code fast & compact.

  • "this is why we need a standard: it assures the operations / behaviors will be the same regardless of which hardware the code runs on."

    But this isn't part of the scope of the C standard, or of the scope of C.

    The actual behaviour of a program must take lots of extras into account, such as (a very incomplete list):
    - the size of the heap - will the program manage to allocate the required amount of memory on the target hardware when the machine is in the expected state? How will the specific implementation of the heap behave with regard to fragmentation?
    - allocation sizes - what is the largest contiguous block that can be allocated from the heap?
    - array indexing - what is the largest memory block that can be indexed as an array? What is the largest index allowed?
    - execution speed - will the program manage to react fast enough to a signal? Will it be able to push enough characters per second through "standard out"? Will it manage to perform a computation and emit the answer before a new input arrives?
    - numeric range of data types. Not all processors have the same word size. Not all processors even make the same decisions for small/int/large types, even given the same native accumulator and/or register sizes. A machine need not have two's-complement integers in 2^n-sized data types: there have been 6-bit and 9-bit characters and 36-bit ints. How do you write an embedded program if the compiler defines the char type as 16 bits, but the processor has 8-bit SFRs side-by-side with no padding?
    - capability of the stack. A perfectly written recursive expression evaluator may not work on all platforms simply because some expressions result in too-deep recursion.
    - self-modifiability. Function pointers are pointers. Some targets can memcpy() a function to a new address and allow that new address to be used as a function call, so C code can duplicate a function into RAM before a code flash is erased. Some architectures can't run code from any R/W memory area. Some compilers/linkers have helper tools to link a function into code space for duplication into RAM space.
    - ability to handle or produce unaligned or packed data.

    The language standard does not assure the same behaviour for all C programs, or even for all valid C programs. It just tells the developer that, within a given envelope, the program will behave with strict compatibility. Outside that envelope, the developer will either be totally on his own, or will have to rely on the compiler vendor's notes about target-specific limitations/design choices.

    In embedded development, the target hardware often has a limited size. So the developer will have to write lots of comments and/or documents about the assumptions made, and about the tests required if the code is moved to other hardware: evaluations that must be tested with worst-case parameters to verify there is no overflow/underflow/division by zero, and time-critical code that must be verified with a profiler or an oscilloscope.

    Design-by-contract can be a nice development strategy. It's too bad that embedded targets often don't have the available code and/or RAM space for running debug builds with contracts validation code included.

  • "That is a very, very, very big "if"!!"

    absolutely. for this strategy to yield fruit, the code will need to be developed with portability in mind.

    just because you are using C doesn't mean you will write portable code.

    no argument there whatsoever.

  • "In embedded development, the target hardware often has a limited size. So the developer will have to write lots of comments and/or documents about the assumptions made, and about the tests required if the code is moved to other hardware."

    agreed.

    that's why writing portable code is hard, and why it is difficult for many to understand.

  • "In embedded development, the target hardware often has a limited size. So the developer will have to write lots of comments and/or documents about the assumptions made, and about the tests required if the code is to be moved to other hardware."

    I inserted two words in the quote to make the point.
    what is stated is a thing that must be done at the conception of the code for the original processor.

    anyone ever been told at conception that the code would be moved?

    Erik

  • 'what is stated is a thing that must be done at the conception of the code for the original processor.'

    it doesn't have to be done at the conception; but it is best done at the conception.

    there are different kinds of portability:

    1) cross-platform portability: many tasks are not hardware-specific, like doing an FFT, for example (without using hardware). a PID library would be another good example here.

    2) cross-family portability: some tasks are hardware-specific to a family of chips. their portability is likely limited to the interface - you always call i2c_send() to send a byte over I2C, but different platforms may have their own ways of performing that task, etc. those things that are specific to that family / hardware obviously aren't portable and have to be recreated on new hardware. but portability insulates the higher layers from being (materially) rewritten.

    then there are pieces of code that are not going to be portable regardless of what you do. part of our job is to minimize that portion.

  • 'anyone ever been told at conception that the code would be moved?'

    many times over.