
Code portability

Hello,
I was browsing through older posts that deal with the painful issue of portability (http://www.keil.com/forum/docs/thread8109.asp). I was (and still am) a big advocate of programming as closely as possible to the C standard, and of having a layered structure that allows "plugging in" other hardware. But I have recently come to change my mind. I am reading the "ARM System Developer's Guide" (an excellent book, by the way; I'm reading it because I want to port some C167 code to an ARM9 environment), in which chapter 5 discusses writing efficient C code for an ARM. The point, and it is convincingly demonstrated there, is that even common, innocent-looking C code can be either efficient or very inefficient on an ARM depending on specific choices made - let alone on another processor! So, if we are talking about squeezing every clock cycle out of a microcontroller, I do not believe that portability is possible without ultimately littering the code.
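To make that concrete, here is a minimal sketch of the kind of choice chapter 5 discusses (the function names are hypothetical): the width of a loop counter changes the code an ARM compiler can generate, while on an 8-bit 8051 the trade-off runs the other way - the same "innocent" source is cheap on one target and expensive on the other.

    #include <stdint.h>

    /* With an 8-bit counter, an ARM compiler typically has to re-narrow
       the value on every iteration (an extra AND r1,r1,#0xff), because
       the registers are 32 bits wide. */
    uint32_t checksum_narrow(const uint8_t *data)
    {
        uint8_t  i;
        uint32_t sum = 0;
        for (i = 0; i < 64; i++)
            sum += data[i];
        return sum;
    }

    /* A full-width counter matches the register size, so the loop body
       compiles to a plain add/compare/branch. On an 8051, the 8-bit
       counter above would be the cheaper choice instead. */
    uint32_t checksum_wide(const uint8_t *data)
    {
        uint32_t i;
        uint32_t sum = 0;
        for (i = 0; i < 64; i++)
            sum += data[i];
        return sum;
    }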

  • Better companies try to use "build servers" whenever possible, where all software is built on a reference platform.

    A compiler upgrade is then performed on another machine or in a sandbox. Then a product is built with the new tool, followed by some extensive tests to verify that all test cases (and there should be a large set of documented and preferably automatically testable cases) produce the expected results.

    Only then may a decision be made that a specific product should switch to the new compiler version. But that decision will still affect only a specific application, or new applications that are early in their development cycle.

    Each and every mature product should then be migrated one by one after proper validation.

    After all products have been migrated, the original build environment should still be kept in working order, in case there is a need to jump back to an old software version and produce a branch with a fix for some detected problem, when the customer refuses to make a big upgrade to the latest and greatest version.

    For some products, the customer really does require bug fixes for the version they are running, instead of having to upgrade to a newer version that they haven't tested and accepted. And the acceptance tests can take a very long time - many months or more. Obviously, the bug fixes should be built with exactly the build environment that was used for the original release.

    Virtual machines can be very valuable when it comes to build environments, since a single machine can then host a number of perfectly sandboxed installations, all left in pristine order. Not only that, but the virtual machines can be saved and later restored onto newer hardware without breaking the installation. With a native installation, you lose your reference build machine if the motherboard dies and is too old to be replaceable.
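
    As a small footnote to the traceability point, here is a minimal sketch (the build_id name and the print_build_id hook are hypothetical; __C51__ is a Keil-specific predefined macro) of embedding the toolchain identity and build timestamp in every release image, so that a binary pulled from the field can always be matched to the archived build environment that produced it:

    #include <stdio.h>

    /* __DATE__ and __TIME__ are standard C; the #ifdef records in the
       image itself which compiler built it. */
    const char build_id[] =
        "build: " __DATE__ " " __TIME__
    #ifdef __C51__
        ", compiler: Keil C51"
    #else
        ", compiler: unknown - record it by hand!"
    #endif
        ;

    /* Hypothetical diagnostic hook: dump the identity on request. */
    void print_build_id(void)
    {
        printf("%s\n", build_id);
    }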

  • "Do you think that what the help desk did was stupid [...]"

    Are you sure that it was the help desk that attacked your build environment? A number of other words come to mind...

    I would say that the company has a management problem if any change to the build environment isn't synchronized with a contact person for each and every project that will be (or at least may be) affected.

    A vice president would be quite mad if his electronic diary suddenly went off-line, or was upgraded to a different program without 100% synchronization of all entered information - and 100% compatibility with the local copy stored on his laptop.

    A mechanic who finds that some of his wrenches have suddenly been replaced with metric instead of inch sizes (or the reverse) would take his heaviest sledgehammer and go find the culprit.

  • The only reason I can see for "changing tool vendors" would be that you made a mistake initially and bought bug-ridden tools.

    For the 8051 I've used Intel, Archimedes, Avocet, IAR, Whitesmith, BSO-Tasking, etc, etc, and Keil assemblers/linkers/compilers. (Yes, I know they're all practically derivative works of each other.) So I'm familiar with variations in implementations of the "C" standard (and even then, "C" was evolving). So yes, they were all bug-ridden tools, and we [the industry] corrected that mistake as the tools got better, companies changed ownership, and technology improved. (Just a guess, but I'll bet that pattern will continue in the future.)

    Other reasons include:

    Your company might decide to change tool suites for bizarre reasons, like the new tool set comes with a neat-o key-fob the boss wants, so he mandates that the new tool IDE will now be Crapoli Version 1.03 by Off-Brand Corp.

    Or, you don't want to be forced to find that old Keil uVision 1.03 compiler in order to add a small feature to an old product.

    Or, your company gets bought out by Big-Fish Incorporated, and they have the new whiz-bang mega-buck tool set. Your code now needs to comply.

    Or, you are doing work for the government and they want to make sure that it can be re-created ten or twenty years from now. And they don't want to resurrect Archimedes Inc just to recompile the code.

    Or your product, and the associated code, might be sold to a company that uses XYZ and not your company's ABC.

    Or, if you do contract work, company X might expect your source code to work on their XYZ compiler... but you use the ABC's compiler at home.

    Or, you are a veteran engineer who has a library of tried-n-true code snippets that are qualified to the nth degree, and you want to be able to use them even though ARM was bought out by Altium and Keil was summarily killed because it conflicted with their IAR products. Altium then gets purchased by the newest CAD mega-giant CEIBO, but CEIBO becomes merely a pawn in the Microsoft vs Apple war of 2012. The US government wins that war by seizing all the assets of both companies and now, by law, you are compiling your code using Oracle Corporation's UberBloat "C" IDE. You never know what is going to happen... code defensively.

    "If you have project 'x' running with tools 'a' and switch to tools 'b', what is hindering that you maintain 'x' with 'a'?"

    If you ensure that the development tools are also archived with each revision of code, then you shouldn't have a problem with 'x' and 'a'. But since that is not always the case, nor is it always doable because of licensing problems, I would say that time hinders it.

    erik, I do agree with you on the "udder stupidity" of the 'help-desk'!

    They don't get the engineering environment like we do. The word processor's upgrade will still read those old documents just dandily, so why won't the IDE also be able to read the ".C" files? Remember, they're not engineers for a reason.

    LOL-- the mechanic finds the culprit: root cause fixed and documented with a chalk outline.

    --Cpt. Vince Foster
    2nd Cannon Place
    Fort Marcy Park, VA

  • Per,
    I meant the "IT department" instead of "Help desk", of course.

  • "Do you think that what the help desk did was stupid, or that my statement was stupid? I don't think so! If you mean the latter, well, good luck to you in verifying your software consistency and your test plan!"

    What the help desk did was stupid; I thought that was crystal clear.

    There is an expression: "do not change horses in the middle of the stream".

    Erik

  • Vince,
    what you write is very true, but making the code 'portable' against that whole laundry list would make the portability effort far exceed the effort put into the working code.

    I have been through a compiler change or two and, in each case, said to myself: thank heaven for CodeWright.

    A port (of non-portable code) is relatively painless when you have an intelligent editor.

    However, there are some things that make both coding and porting easier, such as the small example of mine below.

    Erik

                                                // pointer in data in
    #define U8DI  unsigned char   idata * data  // data       idata
    #define U8DX  unsigned char   xdata * data  // data       xdata
    #define U8IX  unsigned char   xdata * idata // idata      xdata
    #define U8XX  unsigned char   xdata * xdata // xdata      xdata
    #define U8IC  unsigned char   code  * idata // idata      code
    #define U8DC  unsigned char   code  * data  // data       code
    #define U8XC  unsigned char   code  * xdata // xdata      code
    #define U8CC  unsigned char   code  * code  // code       code
    
    #define U16DX unsigned short  xdata * data  // data       xdata
    #define U16IX unsigned short  xdata * idata // idata      xdata
    #define U16CC unsigned short  code  * code  // code       code
    #define U16DC unsigned short  xdata * data  // data       xdata
    
    #define U32DX unsigned long   xdata * data  // data       xdata
    #define U32DC unsigned long   xdata * data  // data       xdata
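
    One way to keep such a header portable is to let the memory-space keywords collapse on targets that don't have them. A minimal sketch, assuming Keil's predefined __C51__ macro (the U8DX_P name and copy_block are hypothetical, renamed so they don't clash with the table above):

    #ifdef __C51__                      /* Keil C51: real memory spaces */
    #define U8DX_P unsigned char xdata * data
    #else                               /* flat-memory target, e.g. ARM */
    #define U8DX_P unsigned char *
    #endif

    /* Usage is then identical on both targets; only the header differs
       per port. */
    U8DX_P src_p;
    U8DX_P dst_p;

    void copy_block(unsigned char n)
    {
        while (n--)
            *dst_p++ = *src_p++;
    }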
    

  • I even went to my boss at the time to complain, but to no avail. But what can you expect from somebody who makes statements like this: "I won't ask him why he does not bother to run a static code analyzer, because then I will have less ammunition to bust his ass if it does wrong". Shocking, really.

  • Please explain these two:

    #define U16DC unsigned short  xdata * data  // data       xdata
    #define U32DC unsigned long   xdata * data  // data       xdata
    

    Your U16DC seems to be a U16DX and your U32DC seems to be a U32DX.

  • thanx,

    I evidently got an old uncorrected one.

    I am working from home and much stuff has not been made current.

    Erik