35% smaller and 14% faster code!

There is a new 8051 C compiler in beta test that beats Keil's compiler by 35% in code size and 14% in speed on the Dhrystone benchmark. And there is no need to select a memory model or use special keywords to control data placement.

More details here: www.htsoft.com/.../silabs8051beta

  • If what you are saying is true, then the compiler that translated that fragment of code is broken.

    Why do you think that?

  • Not calling any names here, but that was the compiler supplied by the manufacturer of the chip, [...]

    I don't know why I suddenly started thinking about Microchip...

  • The programmer made one of two possible errors: either blindly trusting the compiler to generate correct assembly code, or not religiously sifting through the compiler's errata sheets to check for this situation.

    You've missed the point. I was after an example of the sort of error being discussed - a 'C' coding mistake caused by faulty logic or faulty implementation of correct logic. It's a given that one would have to inspect the assembly output if there is in fact no error in the 'C' code.

  • I don't know why I suddenly started thinking about Microchip...

    Never worked with any of their products, sorry. But I think there are alternative compilers available for their architectures.

    In my case, there was no alternative. And I guess the response from tech support would have been much, much different if I had worked on a large-volume project (millions of units per year, like ... cellphones) instead of one with a paltry 10k to 100k units per year.

    Oh, and nastily enough, the compiler generated completely correct assembly if the debug symbols were turned on (yes, with everything else, including the optimization settings, being unchanged). Took me a while to figure out why I couldn't "reproduce" the error with my debug version, while it was perfectly reproducible with the release version.

  • Why do you think that?

    Well, apart from anything else, I pasted the code fragment into a C file and compiled it with a few of the variety of compilers I have on hand. All produced code that delivered the expected result. That's empirical confirmation of my assessment, made by inspecting the code, that the behaviour reported was at odds with the behaviour the C code itself describes.

  • I was after an example of the sort of error being discussed - a 'C' coding mistake caused by faulty logic or faulty implementation of correct logic.

    Well, any case of lawyer code (e.g. use of code with effects not specified by the C language standard) would suffice there. Even the most competent C programmer cannot tell whether the code will do what it is supposed to do without either knowing the implementation details of the compiler or looking at the generated assembly.

    (And no, I don't consider knowing by heart what

    some_function(++a, ++a);
    

    does on seven different compilers to be part of being a competent C programmer. A competent C programmer will know that this is heavily compiler dependent and avoid such expressions whenever possible. There is no way of knowing whether this will work as intended by just looking at the C code)
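
    For what it's worth, here is a minimal sketch of why that expression is unsafe, and a rewrite whose result every conforming compiler has to agree on. (some_function's two-int signature and the values are invented purely for illustration.)

    #include <stdio.h>

    /* hypothetical two-argument function, just so there is something to call */
    static void some_function(int x, int y)
    {
        printf("x=%d y=%d\n", x, y);
    }

    int main(void)
    {
        int a = 0;

        /* some_function(++a, ++a);
         * Undefined behaviour: 'a' is modified twice with no intervening
         * sequence point, and the order in which the arguments are evaluated
         * is unspecified. One compiler may pass 1 and 2, another 2 and 1,
         * another something else entirely. */

        /* Well-defined rewrite: sequence the increments yourself. */
        int first  = ++a;    /* a is now 1 */
        int second = ++a;    /* a is now 2 */
        some_function(first, second);

        return 0;
    }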

  • Regarding the example:

    some_function(++a, ++a);
    

    Who really writes code like this? Are the (questionable) optimizations of any side effects from such a line ever worth it?

    In our case, all people MUST undergo an initial period of training to ensure that the prescribed development rules are understood before they are let loose on writing code. Hence expressions like the above, and any resultant assumptions, are avoided.

    Simple.

  • Who really writes code like this?

    People who don't know better (and you might have to debug their code at some point), people who don't care and people who are actively malicious.

    Are the (questionable) optimizations of any side effects from such a line ever worth it?

    Some people may think that writing a program with as few keystrokes as possible is a worthwhile goal.

    Granted, the example was glaringly obvious and should make anyone halfway familiar with C cringe. Any compiler with half a brain should emit a warning. However, MS VC++ doesn't seem to care about a = a++; ... other compilers I use do find this worth a warning.
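
    If you want to check which of your compilers do care, a fragment like this is enough of a test case (the flags named are GCC and Clang ones; other compilers call their warnings something else, if they warn at all):

    /* test.c - feed this to each compiler and see which ones complain */
    int bump(int a)
    {
        a = a++;    /* undefined behaviour: 'a' is modified twice between
                       sequence points; gcc -Wall reports it (-Wsequence-point),
                       clang reports it (-Wunsequenced) */
        return a;
    }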

  • "People who don't know better (and you might have to debug their code at some point), people who don't care and people who are actively malicious."

    I take your point on that one. I have come across similarly dubious code in legacy projects.

    Not so long ago I was scanning over some code of a (supposedly senior) team member. There was a block of believable code, in a released project, that had a comment just above it stating:

    /* THIS CODE DOES NOT WORK */
    

    Not too surprisingly, the team member wasn't part of my team for much longer!

  • Well, apart from anything else, I pasted the code fragment into a C file and compiled it with a few of the variety of compilers I have on hand. All produced code that delivered the expected result.

    Ah, the forum made it look as though you were replying to Per Westermark's post, hence my question. It's a good idea to always quote a bit of the post you're replying to, to avoid confusion.

  • Not too surprisingly, the team member wasn't part of my team for much longer!

    Well, the question is: if the code (obviously) didn't work, why wasn't this caught during testing? Or was the comment outdated and the code correct?

  • No, not simple. Besides assuming that you do manage to teach them all to behave, you also assume that you really control every path by which source code lands on your desk.

    Did you see my example? The library in question wasn't written in-house, but for policy reasons (sales people like partnerships, since they look so nice on the web page...) you sometimes have to integrate code that has suddenly been dropped in your lap.

    Sometimes management buys products that you may have to take care of. Sometimes your product needs to be integrated with a customer's product. Sometimes someone decides to buy a magic library that will greatly decrease the development time of a new feature. There are many ways for new and strange code to get inside the house. Not all of it is written by really competent developers.

  • By your implication, I was incompetent for assuming that the documented "function" actually was a function.

    Not at all.

    Something documented as a function should really behave as a function, don't you think?

    Absolutely.

    Life is a lot easier when you have written every single line of the code - as soon as someone else has been involved, you have to assume that they have followed the traditional best practices or you will never manage to get a final product.

    Indeed.

    What I was after was an example to illustrate the premise I was querying, which was:

    You appear to be saying that given an error in a 'C' program that is caused by:

    a) Faulty logic
    or
    b) Faulty implementation of correct logic

    you might find yourself debugging at assembly level to spot the error?

    To that end I asked for:

    I would be interested to see a concrete example of an error that a competent 'C' programmer might make that would not be more easily spotted by reviewing the 'C' code rather than stepping through compiler generated assembly code.

    In other words, an algorithmic or logical error rather than one introduced by someone else's mistake.

    I'm interested because if there are situations like this, I certainly haven't come across them. If I find a bug when testing I know that the chances that the problem is in my code are high, so I check my code. I can't imagine why I might find it easier in this situation to look at compiler generated assembly rather than the source code I actually wrote.

  • A lot of debuggers allow you to watch _both_ your C code and the assembler, so looking at the assembler is not a big disadvantage.

    In my case, it showed where I was guilty of an incorrect assumption. I assumed that the library was written by a competent developer.

    But there could also be a situation where I am blind to my own errors, because parts of my brain have already decided that a specific piece of code _must_ be correct. Seeing both assembly + C could then kick my brain out of its incorrect track and have it start to see what is really there, instead of what it assumes is there.

    Have you ever looked at a table for your keys, and failed to see them just because your mind has already decided that they can't be there, or that they have to be on the right side of the desk, or that the bright red key ring sticking out from under a paper just can't be your keys, since you know that you haven't touched that paper since the last time you had your keys?

    Our brain is a marvel at pattern matching, which is the reason it is hopeless to try to write an application with any real intelligence. But an engine with too good pattern matching has a tendency to sometimes find patterns where no pattern exists.

    I'm a lot better at staying focused when looking at really advanced algorithms. Most overlooked errors are likely to be in the trivial parts of the code - or maybe the debug printout that is left inside the algorithm. If you see 50 lines of non-trivial code, and three debug printouts, you are likely to skip over the debug lines and put all your focus on the "real" code. Such irrational - but not too uncommon - decisions can easily make you miss that little = instead of == in one of the printouts. Or maybe someone has been "optimizing" a bit and added a ++ in the printout, since that saves a line of code - until I come along and decide that the printouts should be conditionally included... (a sketch of that trap follows at the end of this post).

    In the old days, we needed the assembler output since we couldn't trust the compilers. Today's compilers are so reliable that we can limit ourselves to taking a peek at the code output for extremely time-critical code, looking at compiler output to learn how to use the assembler instructions of a new processor, or now and then just getting our brains to switch track and start processing data again, instead of living on old assumptions.
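
    As a sketch of that printout trap (DEBUG_PRINT and sum_samples are made up for the example, not taken from any real project):

    #include <stdio.h>

    #ifdef DEBUG
    #define DEBUG_PRINT(...)   printf(__VA_ARGS__)
    #else
    #define DEBUG_PRINT(...)   ((void)0)   /* printouts vanish in release builds */
    #endif

    int sum_samples(const int *buf, int n)
    {
        int total = 0;
        int i = 0;

        while (i < n)
        {
            total += buf[i];
            /* The ++ "saves a line of code" - but it disappears together with
             * the printout in the release build, so the loop never advances
             * past the first sample. */
            DEBUG_PRINT("sample %d done\n", i++);
        }
        return total;
    }

    Built with -DDEBUG it works; built without it, the increment goes away with the printout and the function hangs.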

  • Well, any case of lawyer code (e.g. use of code with effects not specified by the C language standard) would suffice there.

    A competent 'C' programmer wouldn't do that.

    What this boils down to for me is this:

    If you find yourself reaching for the ICE or stepping through compiler output on a regular basis, you are either working with third-party junk rather than decent development tools or libraries, or the code you have written is junk. The 'have a go' programmers who 'don't care a hoot' about the standard find themselves unable to get anything to work without constant debugging, which they are incapable of doing at source level. Why? Because they cannot tell whether the code they have written *should* work or not. They find out how it *actually* works by experimenting with the compiler, rather than just reading the damn manual.

    This is why the world is full of unreliable, unmaintainable junk.