
35% smaller, and 14% faster code!

There is a new 8051 C compiler in beta test that produces code 35% smaller and 14% faster than Keil's compiler on the Dhrystone benchmark. And there is no need to select a memory model or use special keywords to control data placement.

More details here: www.htsoft.com/.../silabs8051beta

Parents
  • Well, any case of lawyer code (e.g. use of code with effects not specified by the C language standard) would suffice there.

    A competent 'C' programmer wouldn't do that (a minimal sketch of what such code looks like appears at the end of this post).

    What this boils down to for me is this:

    If you find yourself reaching for the ICE or stepping through compiler output on a regular basis you are either working with 3rd party junk rather than decent development tools or libraries, or the code you have written is junk. The 'have a go' programmers who 'don't care a hoot' about the standard find themselves unable to get anything to work without constant debugging which they are incapable of doing at source level. Why? Because they cannot tell whether the code they have written *should* work or not. They find out how it *actually* works by experimenting with the compiler, rather than just reading the damn manual.

    This is why the world is full of unreliable, unmaintainable junk.
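    As a concrete illustration of the "lawyer code" mentioned above, here is a minimal, purely illustrative sketch (not taken from any real project) of code whose effect the C standard does not define, so different compilers may legitimately produce different results:

        #include <stdio.h>

        int main(void)
        {
            int i = 0;
            int a[2] = {10, 20};

            /* Undefined behaviour: 'i' is both modified and read to index 'a'
               with no sequencing between the two accesses, so any result is
               permitted by the standard. */
            a[i] = i++;

            printf("%d %d\n", a[0], a[1]);   /* output may differ between compilers */
            return 0;
        }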


Children
  • If you find yourself reaching for the ICE or stepping through compiler output on a regular basis you are either working with 3rd party junk rather than decent development tools or libraries, or the code you have written is junk.

    I think I have to agree with that. Most code either works on the first run, or reading the code is enough to see what ails it. A bit of guard code can help in case I have made an incorrect assumption about the value range of the input, or in case I'm inserting the new code in already broken code.
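    A minimal sketch of the kind of guard code meant here, assuming a hypothetical input whose valid range is 0 to 100 (the function and macro names are made up for illustration):

        #include <assert.h>
        #include <stdint.h>

        #define MAX_PERCENT 100u   /* assumed valid range of the input */

        uint16_t scale_by_percent(uint16_t raw, uint16_t percent)
        {
            /* Guard: fail loudly in debug builds if the range assumption is wrong. */
            assert(percent <= MAX_PERCENT);

            /* Defensive clamp for release builds where assert() is compiled out. */
            if (percent > MAX_PERCENT)
                percent = MAX_PERCENT;

            return (uint16_t)(((uint32_t)raw * percent) / MAX_PERCENT);
        }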

  • I think I have to agree with that. Most code either works on the first run, or reading the code is enough to see what ails it
    Try, for instance, to write an interface to an FTDI Vinculum and debug it by the above method; you will die before the program runs.

    There are beautiful debugging theories based on the assumption that all the information given is complete and correct. Just one comment: male cow manure.

    Erik

  • ...unable to get anything to work without constant debugging which they are incapable of doing at source level.

    I agree. Among other sins, this approach yields code that is less maintainable and probably less portable.

  • Among other sins, this approach yields code that is less maintainable and probably less portable.
    Your debugging method has NOTHING to do with code being "less maintainable". What makes code, amongst other things, "less maintainable" is 'fixing' bugs instead of removing them.

    Erik

  • Erik,
    Have you never, at least once in your (presumably) long career, been tempted to win a few nanos here and there by using a platform-specific, nasty trick? That's what this is all about. Jack belongs to the school of standards and "working by the book"; you are the guy who wants to get the work done without losing a single nanosecond. I do admire your approach, but what I have seen so far has persuaded me to join the camp of people who like to hide behind the standard. Maybe it is because I live in the universe of millisecond-critical applications, not less (for now).

  • Have you never, at least once in your (presumably) long career, been tempted to win a few nanos here and there by using a platform-specific, nasty trick?
    The only "platform-specific tricks" I have implemented are tricks specific to the particular derivative I am working with. If a given derivative has, for example, multiple data pointers and the compiler does not use them, I will, in a time-critical application, go straight to assembler. And there I use very platform- (derivative-) specific tricks.

    If by "platform specific" you refer to (re the '51) the stupid 'portability' (who has ever heard of a 'small embedded' project being 'ported'?), I confess that if the Keil compiler allows me to specify DATA, I do it.
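    A minimal sketch of that kind of placement, using Keil C51's non-standard memory-type specifiers (the variable names are illustrative only):

        /* Keil C51 memory-type specifiers (compiler extensions, not ISO C). */
        unsigned char data  loop_count;       /* directly addressable on-chip RAM (DATA)    */
        unsigned char idata temp_buf[16];     /* indirectly addressable on-chip RAM (IDATA) */
        unsigned char xdata log_buf[256];     /* external RAM (XDATA), accessed via MOVX    */
        unsigned char code  crc_table[4] =    /* constants placed in code/flash space       */
            { 0x00, 0x31, 0x62, 0x53 };

    The compiler announced at the top of this thread claims to make such keywords unnecessary by placing data automatically.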

    Erik