
Code portability

Hello,
I was browsing through older posts that deal with the painful issue of portability (http://www.keil.com/forum/docs/thread8109.asp). I was (and still am) a big advocate of programming as close to the C standard as possible, and of having a layered structure that allows "plugging in" other hardware. But I have come to change my mind recently. I am reading the "ARM System Developer's Guide" (an excellent book, by the way; I'm reading it because I want to port some C167 code to an ARM9 environment), in which chapter 5 discusses writing efficient C code for an ARM. The point, and it is fairly well demonstrated, is that even common, innocent-looking C code can be either efficient or very inefficient on an ARM depending on specific choices made, let alone on another processor! So, if we are talking about squeezing every clock cycle out of a microcontroller, I do not believe that portability is possible without ultimately littering the code!
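
To make that concrete, here is a minimal sketch (my own illustration, not an example lifted from the book) of the kind of choice chapter 5 discusses: the index type of an "innocent" loop. On an ARM, the char counter typically costs an extra zero-extension (or 0xFF mask) every iteration, while on an 8-bit part the char is the cheaper choice - so the very same source can be efficient on one target and wasteful on another.

    /* Illustration only: the same loop, two index types. */
    unsigned int checksum_char_index(const unsigned char *p)
    {
        unsigned char i;            /* natural on an 8-bit micro              */
        unsigned int  sum = 0;

        for (i = 0; i < 64; i++)    /* ARM: i may need re-narrowing to 8 bits */
            sum += p[i];
        return sum;
    }

    unsigned int checksum_int_index(const unsigned char *p)
    {
        unsigned int i;             /* matches the ARM's native word size     */
        unsigned int sum = 0;

        for (i = 0; i < 64; i++)    /* no extra masking instructions          */
            sum += p[i];
        return sum;
    }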

  • again if we are discussing "large" (embedded) the statement is valid, for small embedded it is a joke.

    I work primarily with small embedded and I can assure you that portability is not a joke.
    portability between WHAT? processors? compilers? houses?
    a lot of code is "automatically portable", e.g. a mathematical function, and for small embedded anything beyond a "computing function" will take more effort to make portable than to port as non-portable.

    re buggy pseudo-code who cares
    Says it all, really.

    when quoting me, please quote fully

    "re buggy pseudo-code who cares, it did show what I ment.

    Erik

  • mista mikal,

    you is being proudly of tamer is you being yes?????

    you be started warrring of erac and jak agin!!!!!!

    you be hanging marow on his string for the mans to atack the fight yes????

  • Kalib,
    The only thing I see is a well-argued, intelligent discussion, from which there is a lot to learn. War? I don't think so. Disagreement maybe, but that is the foundation for any progress!

  • when I say "portability is a joke" I do not say "porting is a joke".
    My basic point is that, in the rare instance when it is needed, porting "non-portable code" takes far fewer resources than making code portable (most often code that will never be ported).
    With an intelligent editor (I use CodeWright) I can port a substantial chunk of non-portable code in a few hours; making the code portable would take days.

    Erik

    Wow, amazing how this thread grew. I shouldn't go away for a "3.5-day" 'weekend' (remind me not to get sick again). I point this out because reading what I missed did expose the need to keep all comments 'at the bottom', since all of these posts were 'new' to me... "Given that new posts are highlighted I don't see how that would offer any real benefit"

    Ditto to the posts above. (most of them)

    ALL embedded code--and an argument can be made for all code--will have non-PortaCode constructs. Typically, the smaller the processor power, the more the PortaCode suffers. Hence, an over-clocked hyper-extreme Octa-Core ubercomputer ALMOST has the luxury of implementing the purest of standardized-language programming compliance, while your C compiler for the 1802 COULD take on some contorted coding.

    Much of 'it' depends upon your requirements.

    Generally what I mean by smaller processor power includes restricted (under-specified might be a better description) resources like program space, data space, MIPS, FLOPS, core-processor capabilities (e.g. no intrinsic 'divide' instruction), and/or requirements that have grown beyond the original design.
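
    As a hypothetical illustration of the 'no intrinsic divide' point: on such a core every '/' drags in a runtime routine (something like __udivsi3 on GCC-style toolchains), so when the divisor happens to be a power of two, an unsigned shift buys the same result far more cheaply.

        unsigned int average_of_8(const unsigned int *samples)
        {
            unsigned int  sum = 0;
            unsigned char i;

            for (i = 0; i < 8; i++)
                sum += samples[i];

            return sum >> 3;    /* identical to sum / 8 for unsigned sum */
        }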

    But it still goes beyond that. Just because you think you have an under-powered system doesn't mean that it needs to change. A change could spell the doom of the project, so you shall need to squeeze those CPU cycles (period). Many real-world applications pressure the software to make up for deficiencies in the system design, mechanical design, and electrical design. Sometimes it is the correct location to apply that pressure, and sometimes it's not.

    The 'doom' of the project could be a simple parts-cost increase that will force your product to cost more, hence won't sell, hence won't need to be made, hence your company folds, and hence you will get the opportunity to find out how 'generous' your company really is.

    Or, it can mean that your product now needs to be re-qualified with the 'new' design, and that requires far more than the project can afford and still remain viable, and hence ... (see the hence list above).

    Or, this sudden realization that your processor is under-powered could cause you to lose the window of opportunity to have a product:

    You are responsible for the new Indy 500 race-car telemetry system, and the re-spin of the board will cause the contract to lose one week; with the help of the team's overtime, the product will be ready exactly 2 business days after race day. For some reason, the customer will not pay you. Or that Shuttle launch, or Aunt Sally's annual bridge game and her dazzling LED intermission light-show.

    In order to determine whether the real problem is an under-powered system or something else, you must look at a bigger picture than just the computing power of the CPU and the efficiency of the compiler: you must also weigh the 'intangibles' from a management perspective, such as unit cost, project cost, maintenance costs, long- and short-term IP value, and resource costs. Not to mention the other impacts on a system's performance, like how long your LiPo-battery-powered underwater fish tracker will last running software floating-point algorithms on an 8051 versus on a floating-point TI DSP micro.
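
    To put a sketch behind that fish-tracker remark (the ADC width and scale factor below are made-up assumptions): one common way to dodge the software floating-point library on a small 8051 is to keep the quantity in scaled integers instead of float.

        typedef int temp_decideg_t;              /* 253 means 25.3 degrees C */

        temp_decideg_t adc_to_decideg(unsigned int adc_raw)
        {
            /* assumed: 12-bit ADC, full scale = 330.0 degrees C,
               so decidegrees = raw * 3300 / 4096, all in integer math */
            return (temp_decideg_t)(((unsigned long)adc_raw * 3300UL) >> 12);
        }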

    This is why "Per Westermark", "erik malund" and "Tamir Michael" are 100% right (in all of their posts). You must design your systems with an eye for expansion in both hardware capability and software capability. The most capable software --for expansion/portability/etc-- complies with a standardized language as both the tool vendors and CPU architects know that their products will be implementing a standard language. Stick to it as best you can, but don't fool yourself, you SHALL be deviating from the purest (most portable?) code.

    And when you do, you not only have to be smart enough to know it (e.g. knowing that a count-down is quicker than a count-up), but you must be diligent enough to make note of it for when that 'new-guy' takes over the code.
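
    Something like the following is what I have in mind (a sketch, not a prescription); the deviation is tiny, but it is exactly the sort of thing to flag for the new guy.

        /* NOTE for the maintainer: the loop runs DOWN purely for speed; the
           decrement-and-test avoids a separate compare against 'len' on most
           cores, and the clearing order does not matter here.               */
        void clear_buffer(unsigned char *buf, unsigned int len)
        {
            while (len--)
                buf[len] = 0;
        }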

    The compiler [and even chip] vendors know that deviations from a standard shall take place, and they try their best to make as many of the non-standardized (and thus non-portable) implementations as logical as possible, and hopefully with minimal impact (the origin of #pragma comes to mind).
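
    One common way to contain such deviations (a sketch assuming a Keil C51 target; the fallback branch is just a placeholder) is to hide the vendor keyword or #pragma behind a single macro, so a later port only has to touch one header:

        #if defined(__C51__)             /* Keil C51: non-standard 'interrupt' keyword */
          #define TIMER0_ISR(fn)  void fn(void) interrupt 1
        #else                            /* other toolchains: adapt as required        */
          #define TIMER0_ISR(fn)  void fn(void)
        #endif

        TIMER0_ISR(timer0_tick)
        {
            /* the handler body itself stays plain, portable C */
        }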

    Yes, these vendors want their competitors' customers to switch easily to their tools/chips, so they'll attempt to make it as painless as possible. Sticking as close to a standard as possible helps force the focus onto quality, features, and price... just like you/we should be doing.

    --Cpt. Vince Foster
    2nd Cannon Place
    Fort Marcy Park, VA

    P.S. erik,

    WordStar, to Brief (by Underware), to CodeWrite (using Brief Emulation). I've used 'other' editors, but I'm totally hooked on CodeWrite too (alas, only V6.0c though). It's nice to see I'm not the only CodeWrite addict out there.

  • WordStar, to Brief (by Underware), to CodeWrite (using Brief Emulation).
    I do believe you mkean CodeWright

    there is, I believe, also an editor named CodeWrite.

    Erik

  • Doh! Yup. I mkeant "CodeWright." Don't you just hate that?

    --Cpt. Vince Foster
    2nd Cannon Place
    Fort Marcy Park, VA

  • This is why "Per Westermark", "erik malund" and "Tamir Michael" are 100% right (in all of their posts).

    In all of their posts?

    Wow, how much higher praise could one person give to others?

    The three masters are all totally infullable.

    Whoops I, of course, mean infallable.

    And don't Jack Sprat, Christoph Franck or the numerous other people warrant a mention of some sort?

  • 1) I am sure Vince referred to this thread only.
    2) the fact is that much 'discussion' comes from viewpoint differences

    Erik

  • Johnathan,
    Alright, I am prepared to share some of my glory with you :-)

  • And don't Jack Sprat, Christoph Franck or the numerous other people warrant a mention of some sort?

    Of course. And they do it because they are on a mission, or because they enjoy embedded programming, or because they feel a responsibility toward younger, less experienced developers. Or maybe it is something else. After all, what is the point in knowing something if you don't share it? I am sure each and every one here has a good reason to make his own worthy contributions to this forum. Let's keep up the good work!

  • My apologies.

    I wanted to point out how those people, in particular, were addressing the exact issues I was bleating on about, but they were doing it in a different way than I was.

    If you look at the quantity of posts from those three versus the others, you'll find that 'the others' weren't the heavy hitters here (not that those people aren't heavy hitters, just not in this thread). This forum has some very good regular contributors among the fray of 'students' and the general OMG! class of participant.

    And yes, I was talking about this thread and not their life's work on this forum.

    --Cpt. Vince Foster
    2nd Cannon Place
    Fort Marcy Park, VA

  • Several heated debates, but no real "wrong" opinions.

    I think the three (?) debates in this thread can be milled down to something like the following:

    Writing portable code costs extra time.
    Writing non-portable code costs extra time.

    Optimizing for a processor you don't have doesn't give anything.
    Optimizing for your specific processor means that you may be locked into a deficient design.

    Selecting too big processor can kill a project.
    Selecting too small processor can kill a project.

    In the end, it is a question of viewpoint, of company policies, of the volume of the product, etc. The only thing that is known for sure is that portability is not simple.

    A small, very partial list of stumbling stones that can make "portable" hard to really manage:
    - Unix, Win32 or .NET.
    - 8-bit or 64-bit.
    - Little-endian or big-endian.
    - 8-bit or 9-bit characters.
    - Two's-complement integers or sign-magnitude.
    - von Neumann or Harvard architecture.
    - µA battery consumption or mains-powered.
    - Single-instruction or vector-based instructions.
    ...

    The target environments and hardware can vary so much that there can never be "one" solution. It is so easy to think that a program can always be written in a portable way - but how fun is it really to have an algorithm that works with 36-bit integers and 9-bit characters? How many trivial lines of code would break? Has anyone ever written (x & 0xff) and assumed a specific behaviour?
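
    A minimal sketch of the kind of line I mean, assuming for the moment a target where <stdint.h> even exists (it would not on the 9-bit-character machine above): the habit of writing (x & 0xff) quietly assumes an 8-bit character, the pointer cast below additionally assumes a byte order, while the explicit mask at least states its assumptions through the fixed-width types.

        #include <stdint.h>

        uint8_t low_byte_by_cast(uint32_t x)
        {
            return *(uint8_t *)&x;        /* the low byte only on little-endian */
        }

        uint8_t low_byte_by_mask(uint32_t x)
        {
            return (uint8_t)(x & 0xFFu);  /* byte order no longer matters       */
        }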

    Since we can't really write the ultimate portable program, we can argue, and debate, and sometimes fight a bit about which parts of "portable" are best to think about, and exactly what weight we should give to the problem compared to all the other design decisions we have to make in order to produce a working and sellable product that people actually want to buy.

    With so varying architectures (just compare C51 to an ARM7...) it is hard to discuss specifics. An ARM chip can use a recursive algorithm, and the algorithm can be very easy to read, understand and maintain. And it would be portable to just about any processor in this world. But it would be a lousy solution for a C51 user.
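
    A toy example of that contrast (both functions are perfectly standard C): the recursive form reads nicely and is fine on an ARM with its generous stack, but on a C51 it would typically also need Keil's 'reentrant' attribute and would chew through the tiny stack, so the iterative form is the only sane choice there.

        unsigned long sum_recursive(const unsigned char *p, unsigned int n)
        {
            if (n == 0)
                return 0;
            return p[0] + sum_recursive(p + 1, n - 1);
        }

        unsigned long sum_iterative(const unsigned char *p, unsigned int n)
        {
            unsigned long total = 0;

            while (n--)
                total += *p++;
            return total;
        }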

    But on the other hand - writing a C51 program without taking advantage of bit operations would be the ultimate stupidity, since it would waste one of the main reasons for choosing a C51 processor.
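
    For illustration, this is the sort of deliberately non-portable C51 code I mean; 'bit' and 'sbit' are Keil extensions that map straight onto the 8051's bit-addressable RAM and SFR bits (the pin P1.0 is just an arbitrary choice):

        #include <reg51.h>        /* Keil SFR declarations (P1 etc.)             */

        sbit LED      = P1^0;     /* one SFR bit, set/cleared in one instruction */
        bit  rx_ready;            /* lives in the bit-addressable RAM at 20h-2Fh */

        void poll(void)
        {
            if (rx_ready) {       /* compiles to a jump on the bit itself */
                LED = 1;
                rx_ready = 0;
            }
        }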

    Since there is no single correct solution, the bottom line is to think and make educated decisions. And to document them - what the decision was, and (quite important!) why it was taken.

    And always remember that the price of a chip is not proportional to its speed or to the number of supported peripheral devices. Fighting to fit everything into a tiny processor should be done because you can make 10 cents extra times 100k units, not because someone made an oops when selecting the processor.

    Alas, too many products are victims of the oops design methodology...

    Anyone read any threads where the OP is doing a "real" commercial job, but doesn't know how to read the processor documentation? Anyone think they selected the correct processor if they didn't read the documentation before selecting it?

  • Per,
    You wrote: Anyone think they selected the correct processor if they didn't read the documentation before selecting it?

    these things do occur, alas, as we all know. A couple of overseas managers are now considering switching to Linux because it seems to support some powerful processors that Windows CE does not support (yet). The problem is the performance of a system that sports a powerful MX processor. They don't even consider revising the software, which needs
    a) rewriting by people who know what they are doing
    b) careful, controlled maintenance

    but instead they plan to replace the RTOS and change the processor, without fully understanding the costs in terms of compatibility and maintenance. Again - shocking.

  • Tamir Michael wrote: "They don't even consider revising the software [...]"

    I feel for you. Political decisions can be even worse than uninformed decisions made by developers.

    When "management" makes a political decisions, a number of people who know the problems will then have to "make do" and try to make the best of the situation. Foolish developers who make bad decisions because they don't read documentation or plan their design in advance will at least have to suffer the consequences themselves.