There is a new 8051 C compiler in beta test that beats Keil's compiler by 35% in code size and 14% in speed on the Dhrystone benchmark. And there is no need to select a memory model or use special keywords to control data placement.
More details here: www.htsoft.com/.../silabs8051beta
It is common practice for vendors to benchmark their products against themselves and their competitors:
http://www.keil.com/benchmarks/
I have received benchmark results from IAR compared with other vendors in the past.
I have received benchmark results from Keil compared to other vendors in the past.
What's the big deal, that you were notified via a forum post instead of direct mail?
No. The big deal is that the OP found no fault in posting this on a competitor's web forum. Had they used their own forum, or a neutrally operated one, that'd be an entirely different story.
"The big deal is that the OP found no fault in posting this on a competitor's web forum."
So, what's the big deal? Ultimately, whether either one (the post itself, or the OP's seeing no fault in posting it) is a big deal will be determined by the thread's longevity.
Since we are talking about thread longevity, we can make it a bit harder for Keil to remove the thread by challenging them a little.
However, I do not think that it is ok to put a brochure in a competitor's display rack - I find it unethical. Switching over to the electronic world doesn't make a difference.
An end user may post benchmark results on just about any open forum, but a company or its employees should not - repeat not - post on a competitor's forum unless it is deemed necessary to defend themselves. Marketing your own products on a competitor's forum is a very big no-no. Big enough a no-no that it would probably be best for Keil to leave this thread here. It shows a bit about ethics - if you are the "clyde" that posted the benchmark on the htsoft forum.
I think Per is right... if the post was indeed by THE Clyde Stubbs, Keil will most likely keep this thread.
The meat-n-bones of this thread, the "Break-Point-Ability" debug versus "Professionals who are capable of writiting (sic) functional code" positions, are indeed fun to read when you have Erik sparring with the Dark Side.
I would [Professionally] hire Mr. Erik Muland over Mr. Matte Black simply due to Erik's acknowledgment that debugging is more important than those last few percentage points of efficiency. To argue the reverse shows that you are not really designing a system properly and/or do not have the authority to control the design. You are doing somebody else's bidding and/or promoting an error in logic.
That "last few percent" requirement means that you have one of two problems you are trying to solve:
1) not enough CPU power/speed
2) not enough code-space
Both of these points mean that the system was not designed properly to handle the expected loads. There are enough embedded processors out there with enough means (peripherals, memory capacity--on/off chip, MIPS, etc) to meet nearly all of the embedded needs out there. The rare few have already tried to meet such a challenge with limited resources. BUT SOMEBODY made the decision and it's part of the engineering challenge.
A new design that requires the C compiler (a tool) to be far more optimal than what the industry standard has achieved should be a warning about the selected CPU / speed used in the design. ESPECIALLY if the "debug-ability" and "uniquely break-point-able" qualities are hindered. (The experienced professional knows that there is more to "break-points" than fixing poorly written code; there is also unit-level FQT. Mr. Black, in "professional-eze", FQT is known as "Formal Qualification Testing".)
An old design requiring the extra 35% / 17% improvement is an indicator that a decision has been made to increase the expectations and requirements of the software system without providing the hardware to support the increased system needs. Usually this is done because of the well-known and <sarcasm>Indisputable Management Fact that "Software is Free" and hardware is not</sarcasm>. Other times, it is because of a locked-down design. The point here is that somebody made the decision to boost the system without full knowledge of what that boost really means.
Don't get me wrong, having an increase in performance is great, but to sacrifice the debug-ability is a design error.
In academia (a term used to express the absence of business constraints or, in some cases, reality), the system is redesigned to meet the needs. In the real-world embedded engineering environment, the software must fit within the constraints. But somebody is still responsible for those constraints, and usually the factor inhibiting the speed/code-size redesign is management. They base it upon two things: 1) cost and 2) what sells. Your Engineering Ideals come in behind those.
Cost can range from unit cost to development costs; what sells might be total quantity or total quality. It doesn't matter, but the point is that the Engineer's Ideal of a perfectly designed system found in 'academia' is not part of the top criteria in the real business world. That is where this Hi-Tech product meets a need. Companies who want to use the same old product hardware, yet take advantage of the "cost-less" software-based improvements, will be forcing their engineers into trade-offs between sufficient tools and sufficient brains.
Yes, highly optimized code can be debugged/validated, but it takes time. The engineer is responsible for the Time = Money factor. The more brains the engineer has, the less time it takes, thus the less money it costs.
Brad's Big Brain must get bigger and bigger as management DECIDES not to improve the hardware to meet the *new* needs. Seeking refuge in a particular tool is just a quick fix to this underlying problem.
The compiler efficiency improvement is highly welcomed, but the compiler must also meet the needs of the debugging/testing phases, and allow Brad's retarded brother (and we've seen them on this forum) to "get the job done."
I think Keil has done a good job in optimization, debugging and testing. So IMHO Mr. Matte Black's comment about "It would appear that he [erik] is in a minority and it is not justifiable, else compiler writers would already support his [erik] requirements." is in error. Keil does it.
I must also state that I have never used, or know the capabilities of, Hi-Tech products. They might be good too.
--Cpt. Vince Foster 2nd Cannon Place Fort Marcy Park, VA
No new product should require optimum use of the hardware - unless it is a simple product built in large volumes, and it is known that the processor family has variants with larger flash and/or higher CPU frequency so that it is possible to just order a batch of bigger/faster processors to release a premium edition.
But since the lead times for new chips can be quite long, it is dangerous to use a fully slimmed solution even if a bigger/better chip is available. In case a little "gotcha" is found really late in the release cycle, the lead time to get the better chips may result in a release delayed by several months, probably costing far more than the price difference between the two chip models (ignoring the goodwill loss from unhappy customers/resellers).
Another thing to think about is to have some extra speed/code space for instrumenting the code with some defensive bug catchers or field-debug abilities. It is really, really nice to have a bit of extra code that constantly checks important variables for invalid states, out-of-bounds parameters etc and potentially emits a printout, calls a catch-all breakpoint function, ... and then either resets the unit or repairs the parameter to a valid value.
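To make that concrete, here is a minimal sketch in C of one such bug catcher. The MOTOR_SPEED_MAX limit, the function names, and the use of printf are all illustrative assumptions, not anything from a particular product; on a real 8051 target the report would more likely go out over a UART.

#include <stdint.h>
#include <stdio.h>

#define MOTOR_SPEED_MAX 200u            /* hypothetical limit, for illustration only */

/* Catch-all handler: emit a printout and give the debugger one place to
   set a breakpoint. On a real target this could also log or reset the unit. */
static void fault_report(const char *what, unsigned value)
{
    printf("FAULT: %s = %u\n", what, value);
}

/* Defensive check: repair an out-of-bounds parameter to a valid value
   instead of letting it propagate. */
static uint8_t clamp_motor_speed(uint8_t requested)
{
    if (requested > MOTOR_SPEED_MAX) {
        fault_report("motor speed", requested);
        return MOTOR_SPEED_MAX;
    }
    return requested;
}

int main(void)
{
    uint8_t speed = clamp_motor_speed(250u);   /* triggers the catcher */
    printf("using speed %u\n", (unsigned)speed);
    return 0;
}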
I must admit that I'm not too fond of ICE debugging. I prefer to read code, and possibly add an extra printout to verify an assumption. But it really, really helps if the final product can keep some of the asserts etc. If nothing else, it means that I can get the same timing when debugging and when making a release.
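As a rough illustration of keeping the asserts in the final build (names here are assumptions), the sketch below routes a failed check to a small handler instead of compiling the check away, so debug and release builds keep the same timing:

#include <stdio.h>

/* Called on a failed check: print, log, or reset the unit - and a handy
   single place to park a breakpoint when a debugger is attached. */
static void assert_failed(const char *file, unsigned line)
{
    printf("ASSERT failed at %s:%u\n", file, line);
}

/* Unlike the standard assert(), this is never compiled out by NDEBUG. */
#define RUNTIME_ASSERT(cond) \
    do { if (!(cond)) assert_failed(__FILE__, __LINE__); } while (0)

int main(void)
{
    int fifo_count = 9;                 /* pretend something went wrong */
    RUNTIME_ASSERT(fifo_count <= 8);    /* fires and reports file/line */
    return 0;
}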
ICE debugging also often requires extra connectors that are not mounted on the mass-produced units, and when visiting a customer in the field, it might sometimes be nice to be able to have a peek inside an installed unit - possibly connected to misbehaving external equipment that isn't available at my office. Black-box testing will only verify the test cases I have managed to figure out - but the real world has a tendency to create a number of semi-impossible situations too.
Having the ability to use spare interface bandwidth to dump memory contents (using CAN, UART, SPI, ...) from a live installation can be a real life-saver, and in some situations it might not even be really safe to connect an ICE. I don't know about too many opto-isolated ICE tools.
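A memory dump over a spare interface does not need much code. Here is a rough sketch where uart_putc() is a stand-in for a real transmit routine; on a host build it just forwards to putchar so the sketch can be tried as-is, and on the target it would poll the UART's transmit-ready flag instead.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the real UART transmit routine. */
static void uart_putc(char c)
{
    putchar(c);
}

static void uart_puthex(uint8_t b)
{
    static const char hex[] = "0123456789ABCDEF";
    uart_putc(hex[b >> 4]);
    uart_putc(hex[b & 0x0F]);
}

/* Dump 'len' bytes starting at 'addr' as hex, 16 bytes per line. */
static void dump_memory(const uint8_t *addr, uint16_t len)
{
    uint16_t i;
    for (i = 0; i < len; i++) {
        uart_puthex(addr[i]);
        uart_putc((i & 0x0F) == 0x0F ? '\n' : ' ');
    }
    uart_putc('\n');
}

int main(void)
{
    static const uint8_t sample[32] = { 0x01, 0x02, 0xAB, 0xCD };
    dump_memory(sample, sizeof sample);
    return 0;
}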
An ICE is a valuable tool, but it is one of many, and I prefer - not always a choice I'm in control of :) - to catch my bugs before I need to pick up my ICE. Some people may prefer to pick up the ICE earlier, some later, but it may also be a question of what type of software is being developed.

A traditional problem for debuggers is stopping at a breakpoint in a multi-threaded application. What happens to the other threads if they are expected to serve hardware in real time, and a producer or consumer suddenly doesn't produce/consume data at the speed required by the hardware? An ICE can't stop the PC from sending serial data into the target, and the hardware FIFO can only swallow so many characters before overflowing... With a bit of overcapacity in code size and speed, I may get my information without the breakpoint and without breaking the real-time requirements.
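One way to get that information without stopping the target is a small trace ring buffer, recorded at run time and read out afterwards (from a debugger, or dumped over a spare interface). The buffer size and event layout below are illustrative assumptions.

#include <stdint.h>
#include <stdio.h>

#define TRACE_SIZE 64u                 /* power of two keeps the wrap cheap */

typedef struct {
    uint8_t  id;                       /* what happened */
    uint16_t data;                     /* a value worth remembering */
} trace_event_t;

static trace_event_t trace_buf[TRACE_SIZE];
static uint8_t       trace_head;

/* Cheap enough to leave in real-time code: no I/O, no blocking. */
static void trace(uint8_t id, uint16_t data)
{
    trace_buf[trace_head].id   = id;
    trace_buf[trace_head].data = data;
    trace_head = (uint8_t)((trace_head + 1u) & (TRACE_SIZE - 1u));
}

int main(void)
{
    uint8_t i;

    trace(1u, 0x1234u);                /* e.g. "FIFO level on entry" */
    trace(2u, 0x0042u);                /* e.g. "bytes consumed" */

    /* Read out later instead of halting the target at a breakpoint. */
    for (i = 0; i < TRACE_SIZE; i++) {
        if (trace_buf[i].id != 0u)
            printf("event %u: %04X\n",
                   (unsigned)trace_buf[i].id, (unsigned)trace_buf[i].data);
    }
    return 0;
}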
I totally agree!
And you rightly pointed out the High-Volume 'problem' in optimal hardware design. Making the trade-off between a $9 CPU and a $12 CPU, multiplied by 300,000 units, becomes a $900,000 decision. But the 10,000 added service calls you will get also cost money. That is what the managers get paid to decide (BTW, I've been there).
Having a debug [kernel] in the code (provided it is allowed) can greatly reduce the field problems, and as you've stated, it is amazing how many "semi-impossible situations" exist. The most cost effective weapon against this is "defensive programming." Defensive programming adds time and code, so the new design must account for that.
The debug capability when "visiting a customer in the field" can be vital for both the customer and your company. Most embedded products are in the realm of low- to medium-quantity production. The ability to quickly return one of your "quality-based products" with a field error to an operational state can offset the black eye you got when it failed in the first place. Without the tools to quickly diagnose and repair the problem, that black eye can cost you not only money but your "quality" reputation.
Mr. Westermark is spot-on when it comes to being capable of quick (and easy) fixes to a system by designing in "overcapacity."
An ICE is preferable to a simulator, but it is only NEEDED when you have a serious hardware/software error. The High-Volume (>500,000 units) company I worked for did not have an ICE. At first I thought that was funny, but with competent engineers it is rarely needed... actually, we did have an ICE that was super old and lame, so it collected dust. It was used once in the three years I was there, and it was re-confirmed as "useless". In some cases, you simply cannot find the problem without an ICE. ...Bond-outs!
The over-reliance upon an ICE may inhibit the developer's clarity of thought. Kind of like the Flash versus UV-EPROM code memories did/do/does. When you had only five or ten EPROMs erased, and each erase cycle took four+ hours, you didn't willy-nilly try this-n-that to see if this modification now worked. You had to THINK about it first.
With that in mind, the developer who over-relies upon an ICE might not realize the importance of having the debug capabilities that Per Westermark is advocating. By thinking ICE-less for a while, you'll learn what kinds of field-testing demands are out there, and how to code up some really worthwhile debugging techniques/services. Like Per said, having an ICE is not always a choice. Using your brain is... keep it sharpened.
An awful lot of blurb produced by a simple mickey-take!
I don't feel offended that Erik would be preferred at the professional level. He obviously knows how to use an ICE and (apparently) I do not??!!
In general, the comments of Per and Vince are ones I would agree with.
Anyway, WTF - I am a Keil user, have been for many years, and still believe that it is the best tool for the '51 developments that I have been involved in.
I don't think anyone is contradicting you. Erik is also using the Keil compiler. Most probably because he wants to, and not because he is forced to.
I have recently done a bit of PIC programming, using the Microchip compiler since Keil doesn't support that architecture. The "nice" compiler doesn't even support the full grammar... And there is at least one construct where it failed to produce a binary - it is currently unknown whether the error is in the compiler or the linker.
We developers do like good tools - it's just that our definition of good (and what kind of tools we prefer) varies with previous experiences and with the type of products we develop. A one-man project has different requirements than a 100-man project. Life-sustaining equipment has different requirements than a set-top box. The people developing the set-top box would never manage to finish their product if they had to guarantee reliability according to the standards required of a pacemaker.
Because of the different needs, debates about A _or_ B often turn into heated arguments that don't lead to one or the other side switching opinion, and most of the time without making any readers switch position either. Debates that focus on comparing two alternatives have a much better chance of making people listen and try the alternative, since the debate isn't about right or wrong but about how one tool can complement another, or (if the discussion isn't between combatants) leads to information about how users of tool A (who can't afford tool B) can find reasonable workarounds to still be productive.
Per,
I too have recently used the Microchip C compiler.
The Keil C51 compiler certainly has its quirks; but compared to the Microchip "nice" compiler and its quirks, my opinion is that Keil's is more predictable in its output and therefore preferable.
Now I have had the opportunity to migrate across to the Keil ARM compiler. One comment from me with regards to the compiler - Absolutely lovely!
"I am a Keil user, have been for many years and still believe that it is the best tool for the '51 developments that I have been involved in." So do I. But is there anything wrong with wanting 'the best' to be better? I have no problems whatsoever with the optimizations provided by Keil (they are optional and I do not use them); I just want another level where some are implemented and the "debuggability killers" are not.
Someone above talked about "debugging by printf". That will work in many cases, but in "my world", where everything hangs on nanoseconds, inserting a time-consuming call can make the "debug aid" a bug.
Erik
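For that kind of timing-critical code, one low-overhead alternative to a printf is to toggle a spare port pin and watch it on a scope or logic analyzer, since the cost is a single instruction rather than a formatted print. The sketch below uses Keil C51 syntax; the choice of P1.7 and the use of the Timer 0 interrupt are assumptions for illustration only.

#include <reg51.h>                /* Keil C51 SFR definitions */

sbit DEBUG_PIN = P1^7;            /* hypothetical spare pin used as a marker */

void timer0_isr(void) interrupt 1
{
    DEBUG_PIN = 1;                /* entry marker - a single SETB instruction */

    /* ... the time-critical work goes here ... */

    DEBUG_PIN = 0;                /* exit marker - a single CLR instruction */
}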
"I do not think that it is ok to put a brochure in a competitor's display rack - I find it unethical. Switching over to the electronic world doesn't make a difference."
The reason that HT has posted their advertisement here simply has to do with traffic and volume. The Keil forum has it and the HT forum does not. Even so, I'm not sure that it makes much sense to remove it.
Every few years, another Keil competitor has a "new" compiler release that generates smaller and faster code. At Keil, we welcome this kind of innovation and embrace how it helps to expand and grow the 8051 microcontroller marketplace.
Jon