This discussion has been locked.

35% smaller, and 14% faster code!

There is a new 8051 C compiler in beta test that beats Keil's compiler by 35% in code size and 14% in speed on the Dhrystone benchmark. And there is no need to select a memory model or use special keywords to control data placement.

More details here: www.htsoft.com/.../silabs8051beta

Parents
  • By your implication, I was incompetent for assuming that the documented "function" actually was a function.

    Not at all.

    Something documented as a function should really behave as a function, don't you think?

    Absolutely.

    Life is a lot easier when you have written every single line of the code - as soon as someone else has been involved, you have to assume that they have followed traditional best practices, or you will never manage to get to a final product.

    Indeed.

    What I was after was an example to illustrate the premise I was querying, which was:

    You appear to be saying that given an error in a 'C' program that is caused by:

    a) Faulty logic
    or
    b) Faulty implementation of correct logic

    you might find yourself debugging at assembly level to spot the error?

    To that end I asked for:

    I would be interested to see a concrete example of an error that a competent 'C' programmer might make that would not be more easily spotted by reviewing the 'C' code rather than stepping through compiler generated assembly code.

    In other words, an algorithmic or logical error rather than one introduced by someone else's mistake.

    I'm interested because if there are situations like this, I certainly haven't come across them. If I find a bug when testing I know that the chances that the problem is in my code are high, so I check my code. I can't imagine why I might find it easier in this situation to look at compiler generated assembly rather than the source code I actually wrote.

Children
  • A lot of debuggers allow you to watch _both_ your C code and the assembler, so looking at the assembler does not put you at much of a disadvantage.

    In my case, it showed where I was guilty of an incorrect assumption. I assumed that the library was written by a competent developer.

    But there could also be a situation where I am blind to my own errors, because parts of my brain have already decided that a specific piece of code _must_ be correct. Seeing both assembly + C could then kick my brain out of its incorrect track and have it start to see what is really there, instead of what it assumes is there.

    Have you ever looked at a table for your keys, and failed to see them just because your mind has already decided that they can't be there, or that they have to be on the right side of the desk, or that the bright red key ring sticking out from under a paper just can't be your keys, since you know you haven't touched that paper since the last time you had your keys?

    Our brain is a marvel at pattern matching, which is the reason it is hopeless to try to write an application with any real intelligence. But an engine with too-good pattern matching has a tendency to sometimes find patterns where no patterns exist.

    I'm a lot better at staying focused when looking at really advanced algorithms. Most overlooked errors are likely to be in the trivial parts of the code - or maybe in the debug printout that is left inside the algorithm. If you see 50 lines of non-trivial code and three debug printouts, you are likely to skip over the debug lines and put all your focus on the "real" code. Such irrational - but not too uncommon - decisions can easily make you miss that little = instead of == in one of the printouts. Or maybe someone has been "optimizing" a bit and added a ++ in a printout, since that saves a line of code - until I come along and decide that the printouts should be conditionally included...
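    A minimal sketch of the two printout bugs described above (names hypothetical, and in standard C rather than 8051 code): an = where == was meant inside a debug print, and a ++ that disappears together with the printout when it is compiled out.

    ```c
    #include <stdio.h>

    /* Bug 1: the printout was meant to *test* mode, but '=' assigns.
       Return the (now clobbered) mode so the damage is visible. */
    static int buggy_mode_check(int mode)
    {
        printf("debug: fast mode? %d\n", mode = 1);  /* '=' where '==' was meant */
        return mode;                                 /* always 1, whatever came in */
    }

    /* Bug 2: the increment lives inside the conditionally compiled
       printout, so building without VERBOSE silently drops it. */
    static int buggy_counter(void)
    {
        int i = 0;
    #ifdef VERBOSE
        printf("debug: i = %d\n", i++);  /* the ++ vanishes with the printout */
    #endif
        return i;                        /* 0 when VERBOSE is off */
    }
    ```

    Both functions compile cleanly under many default warning settings, which is exactly why these bugs hide in the "trivial" lines.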

    In the old days, we needed the assembler output since we couldn't trust the compilers. Today's compilers are so reliable that we can limit ourselves to taking a peek at the code output for extremely time-critical code, looking at compiler output to learn how to use the assembler instructions of a new processor, or now and then just getting our brains to switch track and start to process data again, instead of living on old assumptions.

  • Mr. Sprat, those logic/implementation errors are found in the "C" construct in most of the occurrences... but not necessarily for most of the 'time.'

    You can find ten of those errors in ten minutes, but the one that takes two hours is the type I'm talking about. Hence, most of your 'time' is spent debugging something where it is not glaringly obvious that the faulty "C" implementation of faulty logic is in error. Usually the problems that take the most time are those that result from 'undocumented features' in the data-sheets, from somebody else's code, or are self-generated. And for those long-duration bugs, you'll need to delve into the assembly.

    Your request for a logic/implementation flaw that is more easily found at the assembly level was fulfilled by Mr. Westermark's #define-type errors. Yes, it is possible to deduce it from within the "C" platform, but a quicker approach is to validate the preprocessed result of the #define at the assembly level. It was a valid example of your request.
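    As a hedged illustration (this is a classic #define pitfall, not necessarily Mr. Westermark's actual example, which isn't shown in this thread): the macro below looks fine at the C call site, but one glance at the preprocessed output or the generated assembly makes the mis-expansion obvious.

    ```c
    /* An unparenthesized macro body: harmless-looking at the call site. */
    #define SCALE_BAD(x)   x * 10 + 5
    #define SCALE_GOOD(x)  ((x) * 10 + 5)

    /* SCALE_BAD(v + 1) expands to  v + 1 * 10 + 5  ==  v + 15,
       not the intended  (v + 1) * 10 + 5.  The C source reads
       correctly; the expansion (or the assembly) shows the truth. */
    static int apply_bad(int v)  { return SCALE_BAD(v + 1); }
    static int apply_good(int v) { return SCALE_GOOD(v + 1); }
    ```

    Running the compiler's preprocess-only mode (e.g. `gcc -E`) over the call site is the quick way to see the expansion without single-stepping anything.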

    But an example of a 'bug' that is more easily found in the hand tracing level of assembly code is this one (yes, it is an 'undocumented feature' of Keil' optimization settings, but it could be argued that I created it myself too)...

    extern u16 Get_ADC_Vector( void );
    
    #define MIN_VOLTAGE  (2458)   // 2457.6 = (FULL_SCALE/2) + (FULL_SCALE*0.10)
    
    void Intruder_isr( void ) interrupt 0 using 2
    {
        u16  val;
    
        do
        {
            Charge_Pump( );          // should take 10 ms/kV
    
            val = Get_ADC_Vector( ); // lsb = 0.043 kV
    
        } while( val < MIN_VOLTAGE ); // removed time-out and hi-rel code
                                      // for 'example' clarity
    }
    

    The "Get_ADC_Vector( )" function is external and in another module--as you would expect. The compiler (Keil) compiles and links with zero errors/warnings.

    With "Global Register Coloring" enabled in Keil's "Code Optimization" options, the Keil compiler did/does not account for the register-bank switch caused by the "using 2" directive, so the parameter passed back from the function is in error: the compiler generates code that accesses the registers absolutely and tries to place them into the function-return registers.

    Keil should either have identified that you must select "Don't use absolute register access" when "Global Register Coloring" is enabled and the "using" directive appears in your code, -OR- Keil's optimizer should have handled it properly.

    This example of an assembly traced bug didn't take too long because I realized that the 'using' directive does modify the reg-bank and I knew it was a risk point.

    But initially I, like any typical user, was relying on Keil to handle that deviation properly, especially since the pre-optimization build proved valid. (An OCG would 'see' this cross-module error and avoid it.)

    FYI: My cure was to eliminate the "using" directive, since it was clearly not needed. Since I already take the time/overhead to call two functions from within the ISR, I could obviously afford the time delay of not switching banks. I also un-checked "Global Register Coloring", since Keil proved that you cannot trust it. Keil's own documentation does not clarify the conflict.
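    A sketch of that cure (Keil 8051 C, so not compilable with a host compiler; u16, Charge_Pump and Get_ADC_Vector are as in the original fragment, and the time-out code remains elided as before): dropping the "using" directive means there is no register-bank switch for the optimizer to mishandle.

    ```c
    /* Cured version, per the fix described above: no 'using 2',
       so registers are saved/restored on the default bank. */
    void Intruder_isr( void ) interrupt 0
    {
        u16  val;
    
        do
        {
            Charge_Pump( );          // should take 10 ms/kV
    
            val = Get_ADC_Vector( ); // lsb = 0.043 kV
    
        } while( val < MIN_VOLTAGE ); // time-out and hi-rel code still elided
    }
    ```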

    Sprat, the real point I was making is that most of a "competent" embedded engineer's TIME is spent dealing with "bugs" at the assembly level. The bugs that are cured at the "C" level are the easy "Doh! what a stupid mistake" type, while the gotchas are not as easily found and do require assembly-level tracing.

    Hopefully the underlying assembly code is not so mangled as to make it hard to trace (such as with register optimization, where some needed data store is held in R3 because the optimizer knows it can stay there until it is needed, and doesn't have to write it out to the data space just to keep the data element current. You then find the traced code comparing against a 'mysterious' R3 that was loaded a long time ago, instead of against 'Critical_Threshold', which is what you expected to see).

    "find patterns where no patterns exists" == "code hallucinations"

    --Cpt. Vince Foster
    2nd Cannon Place
    Fort Marcy Park, VA

  • You can find ten of those errors in ten minutes, but the one that takes two hours is the type I'm talking about. Hence, most of your 'time' is spent debugging something where it is not glaringly obvious that the faulty "C" implementation of faulty logic is in error. Usually the problems that take the most time are those that result from 'undocumented features' in the data-sheets, from somebody else's code, or are self-generated. And for those long-duration bugs, you'll need to delve into the assembly.

    The issue here is that debugging is not pantyhose (one size does NOT fit all). There are individual methods and sequences (look at the source, check it in the ICE, make a - most likely hopeless - try with a simulator, insert some printfs, ...), but to find all bugs in a timely manner, the most dangerous attitude is "THIS is THE way".

    Yes, I do occasionally resort to looking at the assembler; it is, indeed, a tool in my toolchest, and it has at times helped immensely. Does that make it "the right debugging method"? Of course not, but neither does it make it "the wrong debugging method".

    Erik