
35% smaller, and 14% faster code!

There is a new 8051 C compiler in beta test that beats Keil's compiler by 35% in code size and 14% in speed on the Dhrystone benchmark. And there is no need to select a memory model or use special keywords to control data placement.

More details here: www.htsoft.com/.../silabs8051beta

  • Even if the embedded engineer doesn't use such formal qualification testing methods, the "debugging" process is basically the same as the FQT. You break at a known point (in "C") and track/trace the flow and data through the unit under test (the function or suspected area of error), and figure out why your logic/implementation is wrong.

    Typically this is done at the assembly level and not simply at the "C" level, since the 'competent' can easily spot the "C"-level errors quickly. It is on our 'advanced' errors in thought that we spend all of our time "debugging" the assembly.

    I'm not sure I have understood this. You appear to be saying that given an error in a 'C' program that is caused by:

    a) Faulty logic
    or
    b) Faulty implementation of correct logic

    you might find yourself debugging at assembly level to spot the error?

  • If you use #define expressions you may get into trouble with multiple increments/decrements that aren't visible when you single-step the C source. You either have to run the pre-processor to see the code expansion, or single-step the assembler code.

    C++ also has a bit of magic that can require assembler debugging unless you already "know" where you should put a breakpoint to catch the next step.

  • Yes, that is what I am saying. Most errors are either faulty logic or the faulty implementation of correct logic... and of course the faulty implementation of faulty logic.

    At the "C" level, these can be found 'easily' since your emulator/simulator can show you the logical flow as you single-step through the high-level "C" code, and you watch the data-stores change accordingly. But since the 'hard' problems take a much larger percent of our debugging time, we usually are single-stepping through the underlying assembly code that supports each of the "C" statements in order to find our mistakes. Per Westermark just made a clear and common 'error' that usually is found at the assembly level.

    The reason I elaborate on this distinction has to do with highly-optimized code, which causes the underlying assembly language to be seemingly scattered and disjointed due to the use of shared code segments and other "odd looking" (but valid) code that the compiler may generate.

    --Cpt. Vince Foster
    2nd Cannon Place
    Fort Marcy Park, VA

  • Hi Vince, thanks for the reply. I can tell you (hopefully without pushing anyone's buttons :-) that the results I mentioned in the post that started this thread do not depend on any inlining or reverse inlining - i.e. the code IS uniquely breakpointable.

    Those optimizations are on the agenda, but they're not dependent on OCG, which is the really new technology that has delivered the improved performance.

    The "obfuscated data store" question depends on the capabilities of the debugger (and debug format). The compiler does provide full information as to where variables are stored (which can change between memory and registers) at any given point in the program, but many debuggers and debug formats do not have the capability to make use of this. Elf/Dwarf is AFAIK the most capable format in this regard.

    I also appreciate your comments on selectability of optimizations. This is precisely the kind of feedback I was looking for in starting this thread. OCG is a powerful technology, but the final objective is to deliver to engineers what they need - and this thread clearly illustrates that different people have different needs!

    Clyde

  • The major reason why this would happen is that the compiler or linker has performed what is known as "procedural abstraction" or "reverse inlining", i.e. a section of code common to two or more places in the program has been moved aside and replaced with calls to the common code section.

    Is that an accurate summary of your concerns?

  • Hi Clyde,

    remember me from the beta days of the XA compiler?

    Those optimizations are on the agenda, but they're not dependent on OCG, which is the really new technology that has delivered the improved performance
    To repeat my point: ANY optimization is desirable to SOME; the issue is an extremely flexible 'menu' from which you choose what suits YOUR environment.

    Erik

  • Erik, I remember the XA, but not you, I'm afraid. But then I think I've forgotten more than I know, so don't take it personally :-)


  • I do not think that it is ok to put a brochure in a competitors display rack - I find it unethical. Switching over to the electronic world doesn't make a difference.

    The reason that HT has posted their advertisement here simply has to do with traffic and volume. The Keil forum has it and the HT forum does not. Even so, I'm not sure that it makes much sense to remove it.

    Every few years, another Keil competitor has a "new" compiler release that generates smaller and faster code. At Keil, we welcome this kind of innovation and embrace how it helps to expand and grow the 8051 microcontroller marketplace.

    Jon

  • Per Westermark just made a clear and common 'error' that usually is found at the assembly level.

    I'm not sure that I see that as an example of something that might need to be debugged at assembly level. It's a straightforward 'C' level error caused by a failure to understand either side-effects, macro expansion rules or both. As such I wouldn't expect a competent 'C' programmer to make the error, never mind be unable to spot it at the 'C' level.

    I would be interested to see a concrete example of an error that a competent 'C' programmer might make that would not be more easily spotted by reviewing the 'C' code rather than stepping through compiler generated assembly code.

  • I have to agree with Jack, although I think that making mistakes has little to do with what he considers "competence". When I have trouble with macros I usually look at the preprocessor output, I don't bother to debug them in assembly.

  • I would be interested to see a concrete example of an error that a competent 'C' programmer might make that would not be more easily spotted by reviewing the 'C' code rather than stepping through compiler generated assembly code.

    Here's one. Taken from real life, slightly simplified.

    unsigned int i;
    unsigned int some_array[12];
    
    ...
    
    for(i = 8; i < 12; i++)
    {
       some_array[i] = 0xFF;
    }
    

    After the loop, some_array[9...11] were found to be unmodified. No other tasks or ISRs access some_array at the same time. Did you find the error in the C code?

  • I got a bit of code written by another developer, and containing a library.

    What wasn't obvious was that the nice guy had decided to create a function-looking #define without the very common courtesy of using all capitals.

    Would you suspect the following code to step the pointer twice?

    while (*msg) put_data(*msg++);
    

    By your implication, I was incompetent for assuming that the documented "function" actually was a function. Something documented as a function should really behave as a function, don't you think?

    Since I assumed it to be a function (as the documentation claimed), I saw no need to look at any preprocessor output. However, single-stepping through the code with mixed assembler/C made it obvious that the function call did not do what I expected, and why the extra increment managed to step past the termination character. If msg had had multiple characters, I might have noticed that only characters at even positions were emitted, but in this case my only character was emitted (as expected), followed by a very large amount of random junk.

    Life is a lot easier when you have written every single line of the code - as soon as someone else has been involved, you have to assume that they have followed traditional best practices or you will never manage to ship a final product.

  • If what you are saying is true, then the compiler that translated that fragment of code is broken. Use a different compiler - one that you can trust.

  • Did you find the error in the C code ?

    Given that snippet in isolation I can see no error. Please enlighten me.

  • Given that snippet in isolation I can see no error. Please enlighten me.

    There isn't one (the snippet was all that was necessary to reproduce the error, without any ISRs or multitasking). The programmer made one of two possible errors: either blindly trusting the compiler to generate correct assembly code, or not religiously sifting through the compiler's errata sheets to check for this situation.

    Looking at the assembly code, however, it became quite clear that the compiler generated a completely bogus target address for the looping command used in the for-loop, which caused the microcontroller to jump out of the loop after the first iteration.

    Not calling any names here, but that was the compiler supplied by the manufacturer of the chip, with no alternative compilers available. When presented with the C code and the corresponding assembly, their tech support commented "We do not think this is a compiler bug.". I've not contacted them again after this. Most of the program was written in assembly, anyway, which was probably a good thing.