There is a new 8051 C compiler in beta test that beats Keil's compiler by 35% in code size and 14% in speed on the Dhrystone benchmark. And there is no need to select a memory model or use special keywords to control data placement.
More details here: www.htsoft.com/.../silabs8051beta
1) No reference to the "best choice" of optimizing flags for the two compilers.
2) The application is too small to show a difference in code size - remember that the size of the RTL affects small projects more.
3) How much of the code optimization results in speed changes for other applications? Dhrystone isn't exactly relevant for an 8-bit microcontroller with 1-bit instructions...
They really have to produce more information before making any claims in one direction or the other. Compile an application that makes heavy use of one-bit variables and compare the two compilers, then compile a program using a lot of 16-bit or 32-bit variables, and you will see that the comparisons vary a lot. Code size and speed can only be deduced from a significantly large code base of very varied - but applicable - code.
I am sure anyone with enough experience can make a compiler that produces faster and more compact code. Who gives a hoot if the result is not 'uniquely breakpointable'? The result from Keil can be much better if you use a higher optimization level, but who in his/her right mind would use code the emulator can not 'uniquely' breakpoint on? Sorry, I have now offended a lot of people, but debuggability is far more important than those last few percent of efficiency. What really offends me is that nobody (yet) has made a compiler/linker/optimizer that fully maintains program flow and is optimized in all other respects.
Erik
PS Clyde, do you really think it is appropriate to promote your stuff on a website run by a competitor?
Your marketing strategy is flawed and, if I may say, rude. Whether there are limitations on commercials here or not, I call upon Keil to remove your post effective immediately.
"but who in his/her right mind would use code the emulator can not 'uniquely' breakpoint on"
Important for people who are incapable of writing functional code in the first instance, I would imagine!!??
and totally unimportant for those that do not mind occasionally ending up in a debugging nightmare.
I recall an instance where it was finally found out that the reason optimized (non-debuggable) code did not work and non-optimized (debuggable) code worked was a glitch at the other end of the cable. I would immediately hire anyone that could prove that they would, without an ICE, find any glitch (hardware created or not) in less than 15 minutes.
Of course people like matt black would never have that problem, their code is totally independent of undocumented hardware glitches
"Of course people like matt black would never have that problem, their code is totally independent of undocumented hardware glitches"
I have previously had a rule of not answering the posts of hobbyists and amateurs.
The response Erik gave is the reason for that rule. If only I'd not been drawn in by his attempt to inflame!
Maybe Erik should consider going to assembler only projects, then he would be assured that there is no possibility of the compiler corrupting his working C code with optimization.
It is common practice for vendors to benchmark their products against themselves and their competitors:
http://www.keil.com/benchmarks/
I have received benchmark results from IAR compared with other vendors in the past.
I have received benchmark results from Keil compared to other vendors in the past.
What's the big deal, that you were notified via a forum post instead of direct mail?
"I have previously had a rule of not answering the posts of hobbyists and amateurs." Only "hobbyists and amateurs" would write non-debuggable code, and since I do not, you did not bresk your rule.
"the compiler corrupting his working C code with optimization" Not a word about 'corrupting', just debuggability. The ability to set a breakpoint in your ICE that will only 'hit' at the actual place, not at all places 'theraded' together, is essential for fast debugging. Anything can be debugged, that was never the point, but since all research shows that 50 - 80 % of the time developing a project is spent debugging, fast debugging is essential.
dictionary: "debuggability" the ability to debug in a fast and nefficient way.
bresk 'theraded' nefficient
If you write code with the same cavalier manner in which you write your responses, then I can understand your dependence on debugging tools; however, even a lowly optimizing compiler normally picks up typographical errors like these.
Also, 'debuggability'; whose dictionary do you use?
No. The big deal is that the OP found no fault in posting this on a competitor's web forum. Had they used their own forum, or a neutrally operated one, that'd be an entirely different story.
"The big deal is that the OP found no fault in posting this on a competitor's web forum."
So, what's the big deal? Ultimately, whether either (the post itself or the OP's finding no fault posting) is a big deal or not will be determined by the thread's longevity.
Since we are talking about thread longevity, we make it a bit harder for Keil to remove the thread by challenging them.
However, I do not think that it is OK to put a brochure in a competitor's display rack - I find it unethical. Switching over to the electronic world doesn't make a difference.
An end user may post benchmark results on just about any open forum, but a company or its employees should not - repeat not - post on a competitor's forum unless it is deemed necessary to defend themselves. Marketing your own products on a competitor's forum is a very big no-no. Big enough a no-no that it would probably be best for Keil to leave this thread here. It shows a bit about ethics - if you are the "clyde" that posted the benchmark on the htsoft forum.
"Also, 'debuggability'; whose dictionary do you use?" None. I just tried to emphasize that by 'debuggability' I did not, by any means, mean that anything could not be debugged; I was referring to the ability to do efficient debugging (ICE).
Re "cavalier manner ... typographical errors": I am 'redecorating' and thus have the keyboard in my lap. Also, a person should be able to 'read through' a few errors; I would never expect a computer to do so.
Re "amateur - hobbyist" I have produced working systems (not flawless in all cases, but with very few flaws in all cases) for more years than I am willing to admit, let me just say, some were before the microcontroller even existed.
To consider their code so superior that debuggability should not be a concern is a typical attitude of amateurs and hobbyists.
"I have produced working systems (not flawless in all cases, but with very few flaws in all cases) for more years than I am willing to admit, let me just say, some were before the microcontroller even existed."
Likewise, but I prefer to be more rounded when it comes to saying what is best - There is no perfect 'one fits all' solution and I would have expected someone with your claimed experience to appreciate that fact. I do not, in general, see the requirement to use ICE or even consider it when initially developing code. If there is an awkward problem, then I get it out - Else I prefer to study the code.
End.
"I do not, in general, see the requirement to use ICE or even consider it when initially developing code. If there is an awkward problem, then I get it out." And what are you going to do then if the optimizer has 'threaded' your code and you cannot set a 'working' breakpoint?
1) There is no issue "when initially developing code"; my sole point is optimizer 'threading' vs breakpoints.
2) Debugging what is not "an awkward problem" needs no special measures.
3) If your method is not such that "an awkward problem" can be debugged in a reasonable time, you will miss some deadlines.
I've seen too many well qualified graduates who insist on going straight for an ICE when they see a problem, rather than looking for logic errors in their code.
There's no real substitute for thinking and no excuse for laziness.
"I've seen too many well qualified graduates who insist on going straight for an ICE when they see a problem, rather than looking for logic errors in their code." You forgot the quotes. I have seen many that were "well qualified" as far as academia goes, but not in the real world.
"There's no real substitute for thinking and no excuse for laziness." I wholeheartedly agree.
As to your first point, this is not ICE but the scope. Many, many moons ago, when scopes were slower, our best hardware guy and the undersigned 'universalist' started hunting a "once a day" hit. Although software was blamed (I had recoded the process to run 4 times faster), I had no doubt it was hardware. Finally we decided to use the "DNA scope" and, looking through the schematic of a peripheral with, as far as I remember, ~400 TTL gates, found a possible pulse that the scope could not see. We removed the possibility of that pulse and the unit worked. When we approached the original designer, he stated "that can not happen" and pointed to typical values in the datasheet. AAARGH!
Now, as far as "going straight for an ICE": the ICE is never going to tell you what is wrong, but it can be very helpful in finding where to look. E.g. when 2 processors communicate (I currently have 8 running in tandem), the ICE can tell you whether it is transmission or reception. My main 'beef' with those that are "going straight for an ICE" is not their debugging method, but that those types often "insert a fix" instead of actually finding the root problem by "looking for logic errors in their code". I have always stated that bugs should NEVER be fixed, they should be removed.
Mr. black, Normally I don't intervene in such discussions or should I say - exchanges, and Erik certainly does not require my help - but I find your comments to be simply childish:
Dude, if you cannot deal with Erik's arguments with effective replies, just don't bother...
"...if you cannot deal with Erik's arguments with effective replies, just don't bother..."
Trouble is, I don't see any valid argument, just a predisposition to coming up with a fixated priority with regards to optimization and debugging.
Just because he requires a limited form of optimization, does not mean to say that it is the right one.
It would appear that he is in a minority and it is not justifiable, else compiler writers would already support his requirements.
Going back to dedicated TTL circuit debugging really does not help to emphasise the argument. And yes, I have done my share of TTL circuit debugging, so I feel I know enough to say - it does not have a great deal of similarity to compiler optimizations.
Valid arguments - Phooey.
"Going back to dedicated TTL circuit debugging really does not help to emphasise the argument. And yes, I have done my share of TTL circuit debugging, so I feel I know enough to say - It does not have a great deal of similarity to compiler optimizations." Well, if you reread the post you will see that the above was in support of YOUR argument that thinking is required. That thinking is required, of course, has nothing to do with optimization, but what we are discussing is debugging and the hurdles the 'threading' optimization puts in the way of it.