
35% smaller and 14% faster code!

There is a new 8051 C compiler in beta test that beats Keil's compiler by 35% in code size and 14% in speed on the Dhrystone benchmark. And there is no need to select a memory model or use special keywords to control data placement.
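
For readers not used to C51 tools, the "special keywords" referred to are the memory-type extensions that Keil-style 8051 compilers use to place data explicitly. A minimal illustration (not code from either vendor) might look like this:

    /* Keil C51-style memory-type keywords on a plain 8051 */
    unsigned char data  counter;            /* internal RAM, direct addressing   */
    unsigned char idata flags;              /* internal RAM, indirect addressing */
    unsigned char xdata frame_buf[256];     /* external RAM, MOVX access         */
    unsigned char code  lookup[4] = {1, 2, 4, 8};  /* constant table in code (ROM) space */

The beta compiler's claim is that such placement decisions (and the SMALL/COMPACT/LARGE memory-model choice) are made automatically.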

More details here: www.htsoft.com/.../silabs8051beta

Parents
  • I totally agree!

    And you rightly pointed out the high-volume 'problem' in optimal hardware design. Making the trade between a $9 CPU and a $12 CPU, multiplied by 300,000 units, becomes a $900,000 decision. But the 10,000 added service calls you will get also cost money. That is what managers get paid to decide (BTW, I've been there).

    Having a debug [kernel] in the code (provided it is allowed) can greatly reduce field problems, and as you've stated, it is amazing how many "semi-impossible situations" exist. The most cost-effective weapon against these is "defensive programming." Defensive programming adds time and code, so the new design must account for that (a minimal sketch appears at the end of this post).

    The ability to debug when visiting a customer in the field can be vital for both the customer and your company. Most embedded products are in the realm of low- to medium-quantity production. The ability to quickly return one of your "quality" products with a field error to an operational state can offset the black eye you got when it failed in the first place. Without the tools to quickly diagnose and repair the problem, that black eye can cost you not only money but also your "quality" reputation.

    Mr. Westermark is spot-on when it comes to enabling quick (and easy) fixes to a system by designing in "overcapacity."

    An ICE is preferable to a simulator, but it is only NEEDED when you have a serious hardware/software error. The high-volume (>500,000 units) company I worked for did not have one. At first I thought that was funny, but with competent engineers an ICE is rarely needed... actually, we did have one that was so old and limited that it just collected dust. It was used once in the three years I was there, and it was re-confirmed as "useless". Still, in some cases you simply cannot find the problem without an ICE. ...Bond-outs!

    Over-reliance upon an ICE may inhibit the developer's clarity of thought, much as the move from UV-EPROM to Flash code memory did. When you had only five or ten erased EPROMs, and each erase cycle took four-plus hours, you didn't willy-nilly try this and that to see if a modification now worked. You had to THINK about it first.

    With that in mind, the developer who over-relies upon an ICE might not realize the importance of having the debug capabilities that Per Westermark is advocating. By thinking ICE-less for a while, you'll learn what kinds of field-testing demands are out there, and how to code up some really worthwhile debugging techniques/services. Like Per said, having an ICE is not always a choice. Using your brain is... keep it sharpened.

    --Cpt. Vince Foster
    2nd Cannon Place
    Fort Marcy Park, VA
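
    To make the "defensive programming" point concrete, here is a minimal C sketch of the kind of checking meant above. The names (dbg_log, ERR_*, set_heater) are hypothetical and purely for illustration:

        /* Hypothetical hook: in a real system this might set a status flag,
           write to a trace buffer, or feed a debug kernel. */
        extern void dbg_log(unsigned char error_code);

        #define ERR_BAD_SETPOINT 1
        #define ERR_BAD_STATE    2

        enum run_mode { MODE_IDLE, MODE_RUN, MODE_FAULT };

        /* Every "impossible" input is caught, logged and mapped to a safe
           default instead of being silently ignored. */
        void set_heater(enum run_mode mode, unsigned char setpoint)
        {
            if (setpoint > 100) {           /* reject out-of-range requests */
                dbg_log(ERR_BAD_SETPOINT);
                setpoint = 0;               /* fail safe: heater off */
            }

            switch (mode) {
            case MODE_RUN:
                break;                      /* use the validated setpoint */
            case MODE_IDLE:
            case MODE_FAULT:
                setpoint = 0;
                break;
            default:                        /* "semi-impossible" value, e.g. after
                                               memory corruption: log and fail safe */
                dbg_log(ERR_BAD_STATE);
                setpoint = 0;
                break;
            }

            /* ...drive the hardware with the validated setpoint... */
            (void)setpoint;
        }

    The extra compares and the logging call are exactly the "time and code" cost mentioned above; the payoff is that field failures leave a trace instead of a mystery.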

Children
  • An awful lot of blurb produced by a simple mickey-take!

    I don't feel offended that Erik would be preferred at the professional level. He obviously knows how to use an ICE and (apparently) I do not??!!

    In general, the comments of Per and Vince are ones I would agree with.

    Anyway, WTF - I am a Keil user, have been for many years and still believe that it is the best tool for the '51 developments that I have been involved in.

  • I don't think anyone is contradicting you. Erik is also using the Keil compiler. Most probably because he wants to, and not because he is forced to.

    I have recently done a bit of PIC programming, using the Microchip compiler since Keil doesn't support that architecture. The "nice" compiler doesn't even support the full C grammar... and there is at least one construct where it failed to produce a binary - it is currently unknown whether the error is in the compiler or the linker.

    We developers do like good tools - it's just that our definition of good (and what kind of tools we prefer) varies with previous experience and with the type of products we develop. A one-man project has different requirements from a 100-man project. Life-sustaining equipment has different requirements from a set-top box. The people developing the set-top box would never manage to finish their product if they had to guarantee reliability to the standards required of a pacemaker.

    Because of the different needs, debates about A _or_ B often turn into heated arguments that don't lead to one or the other side switching opinion, and most of the time they don't make any readers switch position either. Debates that focus on comparing two alternatives have a much better chance of making people listen and try the alternative, since the debate isn't about right or wrong but about how one tool can complement another, or (if the discussion isn't between combatants) about how users of tool A (who can't afford tool B) can find reasonable workarounds to still be productive.

  • Per,

    I too have recently used the Microchip C compiler.

    The Keil C51 compiler certainly has its quirks; but compared to the Microchip "nice" compiler and its quirks, my opinion is that Keil is more predictable in its output and therefore preferable.

    Now I have had the opportunity to migrate across to the Keil ARM compiler. One comment from me with regard to that compiler - absolutely lovely!

  • I am a Keil user, have been for many years and still believe that it is the best tool for the '51 developments that I have been involved in.
    So do I. But is there anything wrong with wanting 'the best' to be better?
    I have no problems whatsoever with the optimizations provided by Keil (they are optional and I do not use them); I just want another level where some optimizations are implemented and the "debuggability killers" are not.

    Someone above talked about "debugging by printf". That will work in many cases, but in "my world", where everything hangs on nanoseconds, inserting a time-consuming call can make the "debug aid" a bug (a low-overhead alternative is sketched below).

    Erik
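
    A common low-overhead alternative to printf-style debugging in timing-critical code is to drop one-byte event codes into a RAM ring buffer and read the buffer out later (via a monitor, a debugger, or a slow background task). A minimal sketch with hypothetical names, assuming a generic C compiler:

        #define TRACE_SIZE 32                  /* power of two for cheap wrap-around */

        static volatile unsigned char trace_buf[TRACE_SIZE];
        static volatile unsigned char trace_head;

        /* Record a one-byte event code: a handful of instructions and no
           library calls, so the timing impact is tiny compared to printf(). */
        #define TRACE(code)                                              \
            do {                                                         \
                trace_buf[trace_head] = (unsigned char)(code);           \
                trace_head = (unsigned char)((trace_head + 1)            \
                                             & (TRACE_SIZE - 1));        \
            } while (0)

        /* Example use in a time-critical routine: */
        void timer_tick(void)
        {
            TRACE(0x10);            /* entered  */
            /* ...time-critical work... */
            TRACE(0x11);            /* leaving  */
        }

    Whether even those few extra instructions are acceptable is, of course, exactly the judgement call Erik is describing.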