There is a new 8051 C compiler in beta test that beats Keil's compiler by 35% in code size and 14% in speed on the Dhrystone benchmark. And there is no need to select a memory model or use special keywords to control data placement.
More details here: www.htsoft.com/.../silabs8051beta
1) No reference to "best choice" of optimizing flags for the two compilers.
2) Too small application to show difference in code size - remember that size of RTL affects small projects more.
3) How much of the code optimization carries over into speed gains for other applications? Dhrystone isn't exactly representative for an 8-bit microcontroller with single-bit (Boolean) instructions...
They really have to produce more information before making any claims in one direction or the other. Compile an application that makes use of a lot of one-bit variables and compare the two compilers, then compile a program using a lot of 16-bit or 32-bit variables, and you will see that the comparisons vary a lot. Code size and speed can only be deduced from a significantly large code base of very varied - but applicable - code.
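The point about mixed workloads can be illustrated with two tiny, deliberately contrasting kernels. This is purely illustrative (the function names and bodies are my own, not from either vendor's benchmark); a compiler that maps booleans onto the 8051's bit-addressable area may shine on the first, while the second mostly measures the quality of the multi-byte arithmetic library:

```c
#include <stdint.h>
#include <stdbool.h>

/* Kernel 1: pure flag logic. An 8051 compiler can potentially compile
   this down to a handful of single-bit Boolean instructions. */
bool flag_logic(bool a, bool b, bool c)
{
    return (a && !b) || (b && c);
}

/* Kernel 2: 32-bit arithmetic. On an 8-bit core this exercises the
   compiler's multi-byte arithmetic support library instead. */
uint32_t mul32(uint32_t x, uint32_t y)
{
    return x * y + (x >> 3);
}
```

A benchmark dominated by one kind of kernel tells you little about the other, which is why a single Dhrystone number is a weak basis for comparing two 8051 compilers.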
I am sure anyone with enough experience can make a compiler that produces faster and more compact code - who gives a hoot if the result is not 'uniquely breakpointable'? The result from Keil can be much better if you use a higher optimization level, but who in his/her right mind would use code the emulator cannot set a 'unique' breakpoint on? Sorry, I have now offended a lot of people, but debuggability is far more important than those last few percent of efficiency. What really offends me is that nobody (yet) has made a compiler/linker/optimizer that fully maintains program flow and is optimized in all other respects.
Erik
PS Clyde, do you really think it is appropriate to promote your stuff on a website run by a competitor?
bresk 'theraded' nefficient
If you write code with the same cavalier manner in which you write your responses, then I can understand your dependence on debugging tools; however, even a lowly optimizing compiler normally picks up typographical errors like these.
Also, 'debuggability'; whose dictionary do you use?
What's the big deal, that you were notified via a forum post instead of direct mail?
No. The big deal is that the OP found no fault in posting this on a competitor's web forum. Had they used their own forum, or a neutrally operated one, that'd be an entirely different story.
"The big deal is that the OP found no fault in posting this on a competitor's web forum."
So, what's the big deal? Ultimately, whether either (the post itself or the OP's finding no fault posting) is a big deal or not will be determined by the thread's longevity.
Since we are talking about thread longevity: we make it a bit harder for Keil to remove the thread by challenging them.
However, I do not think that it is OK to put a brochure in a competitor's display rack - I find it unethical. Switching over to the electronic world doesn't make a difference.
An end user may post benchmark results on just about any open forum, but a company or its employees should not - repeat not - post on a competitor's forum unless it is deemed necessary to defend themselves. Marketing your own products on a competitor's forum is a very big no-no. Big enough a no-no that it would probably be best for Keil to leave this thread here. It shows a bit about ethics - if you are the "clyde" that posted the benchmark on the htsoft forum.
"Also, 'debuggability'; whose dictionary do you use?" None; by 'debuggability' I did not, by any means, try to indicate that anything could not be debugged - I was referring to the ability to do efficient debugging (ICE).
Re cavalier manner ... typographical errors: I am 'redecorating' and thus have the keyboard in my lap. Also, a person should be able to 'read through' a few errors; I would never expect a computer to do so.
Re "amateur - hobbyist" I have produced working systems (not flawless in all cases, but with very few flaws in all cases) for more years than I am willing to admit, let me just say, some were before the microcontroller even existed.
"To consider their code so superior that debuggability should not be a concern is a typical attitude of amateurs and hobbyists"
"I have produced working systems (not flawless in all cases, but with very few flaws in all cases) for more years than I am willing to admit, let me just say, some were before the microcontroller even existed."
Likewise, but I prefer to be more rounded when it comes to saying what is best - There is no perfect 'one fits all' solution and I would have expected someone with your claimed experience to appreciate that fact. I do not, in general, see the requirement to use ICE or even consider it when initially developing code. If there is an awkward problem, then I get it out - Else I prefer to study the code.
End.
"I do not, in general, see the requirement to use ICE or even consider it when initially developing code. If there is an awkward problem, then I get it out." And what are you going to do then, if the optimizer has 'threaded' your code and you cannot set a 'working' breakpoint?
1) There is no issue "when initially developing code"; my sole point is optimizer 'threading' vs breakpoints.
2) Debugging what is not "an awkward problem" needs no special measures.
3) If your method is not such that "an awkward problem" can be debugged in a reasonable time, you will miss some deadlines.
I've seen too many well qualified graduates who insist on going straight for an ICE when they see a problem, rather than looking for logic errors in their code.
There's no real substitute for thinking and no excuse for laziness.
"I've seen too many well qualified graduates who insist on going straight for an ICE when they see a problem, rather than looking for logic errors in their code." You forgot the quotes around "well qualified". I have seen many that were "well qualified" as far as academia goes, but not in the real world.
"There's no real substitute for thinking and no excuse for laziness." I wholeheartedly agree.
As to your first point, this is not the ICE, but the scope. Many, many moons ago, when scopes were slower, our best hardware guy and the undersigned 'universalist' started hunting a "once a day" hit. Although software was blamed (I had recoded the process to run 4 times faster), I had no doubt it was hardware. Finally we decided to use the "DNA scope" and, looking through the schematic of a peripheral with (as far as I remember) ~400 TTL gates, found a possible pulse that the scope could not see. We removed the possibility of that pulse and the unit worked. Approaching the original designer, he stated "that can not happen" and pointed to typical values in the datasheet. AAARGH!
Now, as far as "going straight for an ICE": the ICE is never going to tell you what is wrong, but it can be very helpful in finding where to look. E.g. when 2 processors communicate (I currently have 8 running in tandem), the ICE can tell you whether it is transmission or reception. My main 'beef' with those that are "going straight for an ICE" is not their debugging method, but that those types often "insert a fix" instead of actually finding the root problem by "looking for logic errors in their code". I have always stated that bugs should NEVER be fixed; they should be removed.
Mr. Black, normally I don't intervene in such discussions - or should I say, exchanges - and Erik certainly does not require my help, but I find your comments to be simply childish:
Dude, if you cannot deal with Erik's arguments with effective replies, just don't bother...
"...if you cannot deal with Erik's arguments with effective replies, just don't bother..."
Trouble is, I don't see any valid argument, just a fixated set of priorities with regard to optimization and debugging.
Just because he requires a limited form of optimization, does not mean to say that it is the right one.
It would appear that he is in a minority and it is not justifiable, else compiler writers would already support his requirements.
Going back to dedicated TTL circuit debugging really does not help to emphasise the argument. And yes, I have done my share of TTL circuit debugging, so I feel I know enough to say: it does not have a great deal of similarity to compiler optimizations.
Valid arguments - Phooey.
I think Per is right... if the post was indeed by THE Clyde Stubbs, Keil will most likely keep this thread.
The meat-n-bones of this thread - the "Break-Point-Ability" debug versus "Professionals who are capable of writiting (sic) functional code" positions - is indeed fun to read when you have Erik sparring with the Dark Side.
I would [Professionally] hire Mr. Erik Muland over Mr. Matte Black simply due to Erik's acknowledgment that debugging is more important than those last few percentage points of efficiency. To argue the reverse shows that you are not really designing a system properly and/or do not have the authority to control the design. You are doing somebody else's bidding, and/or promoting an error in logic.
That "last few percent" requirement means that you have one of two problems you are trying to solve:
1) not enough CPU power/speed
2) not enough code-space
Both of these points mean that the system was not designed properly to handle the expected loads. There are enough embedded processors out there with enough means (peripherals, memory capacity - on/off chip - MIPS, etc.) to meet nearly all of the embedded needs out there. The rare few have already tried to meet such a challenge with limited resources. BUT SOMEBODY made the decision, and it's part of the engineering challenge.
A new design that requires the C compiler (a tool) to be far more optimal than what the industry standard has achieved should be a warning about the selected CPU / speed used in the design. ESPECIALLY if the "debug-ability" and "uniquely break-point-able" qualities are hindered. (The experienced professional knows that there is more to "break-points" than fixing poorly written code, such as unit-level FQT. Mr. Black, in "professional-eze" FQT is known as "Formal Qualification Testing".)
An old design, requiring the extra 35% / 17% improvement, is an indicator that a decision has been made to increase the expectations and requirement of the software system without providing the hardware to support the increased system needs. Usually this is done because of the well known and <sarcasm>Indisputable Management Fact that "Software is Free" and hardware is not</sarcasm>. Other times, it is because of a locked-down design. The point here is that somebody made the decision to boost the system without full knowledge of what that boost really means.
Don't get me wrong, having an increase in performance is great, but to sacrifice the debug-ability is a design error.
In academia (a term used to express the absence of business constraints or in some-cases, reality), the system is redesigned to meet the needs. In the real-world embedded engineering environment, the software must fit within the constraints. But somebody is still responsible for those constraints, and usually the factor inhibiting the speed/code-size redesign is management. They base it upon two things: 1) cost and 2) what sells. Your Engineering Ideals come in behind those.
Cost can range from unit cost to development cost; what sells might be total quantity or total quality. It doesn't matter, but the point is that the Engineer's Ideal of a perfectly designed system found in 'academia' is not part of the top criteria in the real business world. That is where this Hi-Tech product meets a need. Companies who want to use the same-old product hardware, yet take advantage of the "cost-less" software-based improvements, will be forcing their engineers to resort to the sacrifices between sufficient tools and sufficient brains.
Yes, highly optimized code can be debugged/validated, but it takes time. The engineer is responsible for the Time = Money factor. The more brains the engineer has, the less time it takes, thus the less money it costs.
Brad's Big Brain must get bigger and bigger as management DECIDES not to improve the hardware to meet the *new* needs. Seeking refuge in a particular tool is just a quick fix to this underlying problem.
The compiler efficiency improvement is highly welcomed, but the compiler must also meet the needs of the debugging/testing phases, and allow Brad's retarded brother (and we've seen them on this forum), to "get the job done."
I think Keil has done a good job in optimization, debugging and testing. So IMHO Mr. Matte Black's comment about "It would appear that he [erik] is in a minority and it is not justifiable, else compiler writers would already support his [erik] requirements." is in error. Keil does it.
I must also state that I have never used, or know the capabilities of, Hi-Tech products. They might be good too.
--Cpt. Vince Foster 2nd Cannon Place Fort Marcy Park, VA
No new product should require optimum use of the hardware - unless it is a simple product built in large volumes, and it is known that the processor family has variants with larger flash and/or higher CPU frequency so that it is possible to just order a batch of bigger/faster processors to release a premium edition.
But since the lead times for new chips can be quite high, it is dangerous to use a fully slimmed solution even if a bigger/better chip is available. In case a little "gotcha" is found really late in the release cycle, the lead time to get the better chips may result in several months delayed release, probably costing very much more than the price difference between the two chip models (ignoring the goodwill loss from unhappy customers/resellers).
Another thing to think about is to have some extra speed/code space for instrumenting the code with some defensive bug catchers or field-debug abilities. It is really, really nice to have a bit of extra code that constantly checks important variables for invalid states, out-of-bounds parameters etc and potentially emits a printout, calls a catch-all breakpoint function, ... and then either resets the unit or repairs the parameter to a valid value.
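Such a bug catcher can be tiny. The sketch below is illustrative only (the motor-state names, the repair policy, and the diagnostic counter are all hypothetical, not from any poster's project); it shows the pattern of validating an important variable and repairing it to a safe value while recording the event for later field diagnosis:

```c
#include <stdint.h>

/* Hypothetical application state; names are illustrative. */
typedef enum { MOTOR_IDLE = 0, MOTOR_RAMP = 1, MOTOR_RUN = 2, MOTOR_FAULT = 3 } motor_state_t;

/* Diagnostic counter a field-debug dump could read out later. */
static uint16_t g_state_repairs;

/* Returns the state unchanged if it is a legal value; otherwise
   records the corruption and repairs it to the safe MOTOR_FAULT
   state instead of letting the state machine run wild. */
motor_state_t check_motor_state(motor_state_t s)
{
    if (s > MOTOR_FAULT) {   /* out of range: corruption or a stray write */
        ++g_state_repairs;   /* leave a trace for field diagnostics       */
        return MOTOR_FAULT;  /* repair to a valid, safe value             */
    }
    return s;
}
```

The cost is a compare and a branch per check, which is exactly the kind of "overcapacity" spending the post argues for.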
I must admit that I'm not too fond of ICE debugging. I prefer to read code, and possibly add an extra printout to verify an assumption. But it really, really helps if the final product can keep some of the asserts etc. If nothing else, it means that I can get the same timing when debugging and when making a release.
ICE debugging also often requires extra connectors that are not mounted on the mass-produced units, and when visiting a customer in the field, it might sometimes be nice to be able to have a peek inside an installed unit - possibly connected to misbehaving external equipment that isn't available at my office. Black-box testing will only verify test cases I have managed to figure out - but the real world has a tendency to create a number of semi-impossible situations too.
Having the ability to use spare interface bandwidth to dump memory contents (using CAN, UART, SPI, ...) from a live installation can be a real life-saver, and in some situations it might not even be really safe to connect an ICE. I don't know about too many opto-isolated ICE tools.
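A core piece of such a memory dumper is trivially small. The following is a sketch under my own assumptions (the function name and the staging-buffer approach are illustrative, not from any specific product): format a RAM region as hex into a buffer, which the firmware then pushes out over whatever spare link exists (UART, SPI, CAN payloads):

```c
#include <stddef.h>
#include <stdint.h>

static const char HEX[] = "0123456789ABCDEF";

/* Formats n bytes starting at p as two hex digits each into out
   (which must hold at least 2*n+1 chars).  Returns the number of
   characters written, excluding the terminating NUL.  The caller
   then transmits the buffer over the spare interface. */
size_t format_hex(const uint8_t *p, size_t n, char *out)
{
    size_t i;
    for (i = 0; i < n; ++i) {
        out[2 * i]     = HEX[p[i] >> 4];
        out[2 * i + 1] = HEX[p[i] & 0x0F];
    }
    out[2 * n] = '\0';
    return 2 * n;
}
```

Staging into a buffer rather than writing directly to the port keeps the formatting independent of the transport, so the same routine serves UART, SPI, or CAN framing code.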
An ICE is a valuable tool, but it is one of many, and I prefer - not always a choice I'm in control of :) - to catch my bugs before I need to pick up my ICE. Some people may prefer to pick up the ICE earlier, some later, but it may also be a question of what type of software is being developed.

A traditional problem for debuggers is stopping at a breakpoint in a multi-threaded application. What happens to the other threads if they are expected to serve hardware in real time, and a producer or consumer suddenly doesn't produce/consume data at the speed required by the hardware? An ICE can't stop the PC from sending serial data into the target, and the hardware FIFO can only swallow so many characters before overflowing... With a bit of overcapacity in code size and speed, I may get my information without the breakpoint and without breaking the real-time requirements.
I totally agree!
And you rightly pointed out the high-volume 'problem' in optimal hardware design. Making the trade between a $9 CPU and a $12 CPU multiplied by 300,000 units becomes a $900,000 decision. But the 10,000 added service calls you will get also cost money. That is what the managers get paid to decide (BTW, I've been there).
Having a debug [kernel] in the code (provided it is allowed) can greatly reduce the field problems, and as you've stated, it is amazing how many "semi-impossible situations" exist. The most cost effective weapon against this is "defensive programming." Defensive programming adds time and code, so the new design must account for that.
The debug capability when "visiting a customer in the field" can be vital for both the customer and your company. Most embedded products are in the realm of low- to medium-quantity production. The ability to quickly return one of your "quality based products" with a field error to an operational state can offset the black eye you got in the first place when it failed. Without the tools to quickly diagnose and repair the problem, that black eye can cost you not only money but your "quality" reputation.
Mr. Westermark is spot-on when it comes to being capable of quick (and easy) fixes to a system by designing in "overcapacity."
An ICE is preferable over a simulator, but it is only NEEDED when you have a serious hardware/software error. The high-volume (>500,000 units) company I worked for did not have an ICE. At first I thought that was funny, but with competent engineers it is rarely needed... Actually, we did have an ICE that was super old and lame, so it collected dust. It was used once in the three years I was there, and it was re-confirmed as "useless". In some cases, you simply cannot find the problem without an ICE. ...Bond-outs!
The over-reliance upon an ICE may inhibit the developer's clarity of thought. Kind of like the Flash versus UV-EPROM code memories did/do/does. When you had only five or ten EPROMs erased, and each erase cycle took four-plus hours, you didn't willy-nilly try this-n-that to see if this modification now works. You had to THINK about it first.
With that in mind, the developer who over-relies upon an ICE might not realize the importance of having the debug capabilities that Per Westermark is advocating. By thinking ICE-less for awhile, you'll learn what kinds of field testing demands there are out there, and how to code-up some really worth-while debugging techniques/services. Like Per said, having an ICE is not always a choice. Using your brain is... keep it sharpened.
An awful lot of blurb produced by a simple mickey-take!
I don't feel offended that Erik would be preferred at the professional level. He obviously knows how to use an ICE and (apparently) I do not??!!
In general, the comments of Per and Vince are ones I would agree with.
Anyway, WTF - I am a Keil user, have been for many years, and still believe that it is the best tool for the '51 developments that I have been involved in.
I don't think anyone is contradicting you. Erik is also using the Keil compiler. Most probably because he wants to, and not because he is forced to.
I have recently done a bit of PIC programming, using the Microchip compiler since Keil doesn't support that architecture. The "nice" compiler doesn't even support the full grammar... And there is at least one construct where it failed to produce a binary - currently unknown whether the error is in the compiler or the linker.
We developers do like good tools - it's just that our definition of good (and what kind of tools we prefer) varies with previous experiences and with type of products we develop. A one-man project has different requirements than a 100-man project. A life-sustaining equipment has different requirements than a set-top box. The people developing the set-top box would never manage to finish their product if they had to guarantee the reliability according to the standards needed of a pace-maker.
Because of the different needs, debates about A _or_ B often result in heated exchanges that don't lead to one or the other side switching opinion, and most of the time without making any readers switch position either. Debates that focus on comparing two alternatives have a much better chance of making people listen and try the alternative, since the debate isn't about right or wrong but about how one tool can complement another, or (if the discussion isn't between combatants) leads to information on how users of tool A (who can't afford tool B) can find reasonable workarounds to still be productive.
Per,
I too have recently used the Microchip C compiler.
The Keil C51 compiler certainly has its quirks; but compared to the Microchip "nice" compiler and its quirks, my opinion is that the Keil is more predictable in its output and therefore preferable.
Now I have had the opportunity to migrate across to the Keil ARM compiler. One comment from me with regards to the compiler: absolutely lovely!
"I am a Keil user, have been for many years and still believe that it is the best tool for the '51 developments that I have been involved in." So do I. But is there anything wrong with wanting 'the best' to be better? I have no problems whatsoever with the optimizations provided by Keil (they are optional and I do not use them); I just want another level where some are implemented and the "debuggability killers" are not.
Someone above talked about "debugging by printf". That will work in many cases, but in "my world", where everything hangs on nanoseconds, inserting a time-consuming call can make the "debug aid" a bug.
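A common alternative in timing-sensitive code, sketched here under my own assumptions (buffer size, event codes, and names are all illustrative), is to log event codes into a small circular RAM buffer instead of calling printf. The per-event cost is a couple of stores and an index update, so the code's timing is barely disturbed, and the buffer can be read out afterwards over a debug link or with the ICE:

```c
#include <stdint.h>

#define TRACE_SIZE 32u   /* power of two keeps the wrap-around cheap */

static volatile uint8_t trace_buf[TRACE_SIZE];
static volatile uint8_t trace_head;

/* Records an event code and one byte of associated data.  Only a few
   instructions long, unlike a formatted printf over a slow UART. */
void trace(uint8_t event, uint8_t data)
{
    trace_buf[trace_head] = event;
    trace_buf[(trace_head + 1u) % TRACE_SIZE] = data;
    trace_head = (uint8_t)((trace_head + 2u) % TRACE_SIZE);
}
```

This doesn't eliminate the probe effect entirely, but it shrinks it from "a time-consuming call" to a near-constant handful of cycles, which is usually tolerable even when timing matters.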