There is a new 8051 C compiler in beta test that beats Keil's compiler by 35% code size, and 14% speed, on the Dhrystone benchmark. And, there is no need to select a memory model or use special keywords to control data placement.
More details here: www.htsoft.com/.../silabs8051beta
1) No reference to "best choice" of optimizing flags for the two compilers.
2) Too small application to show difference in code size - remember that size of RTL affects small projects more.
3) How much of the code optimization results in speed changes for other applications? The Dhrystone isn't exactly relevant for an 8-bit microcontroller with 1-bit instructions...
They really have to produce more information before making any claims in one direction or the other. Compile an application that makes heavy use of one-bit variables and compare the two compilers, then compile a program using a lot of 16-bit or 32-bit variables, and you will see that the comparisons vary a lot. Code size and speed can only be deduced from a significantly large code base of widely varying - but applicable - code.
I am sure anyone with enough experience can make a compiler that produces faster and more compact code, but who gives a hoot if the result is not 'uniquely breakpointable'? The result from Keil can be much better if you use a higher optimization level, but who in his/her right mind would use code the emulator cannot 'uniquely' breakpoint on? Sorry, I have now offended a lot of people, but debuggability is far more important than those last few percent of efficiency. What really offends me is that nobody (yet) has made a compiler/linker/optimizer that fully maintains program flow and is optimized in all other respects.
Erik
PS Clyde, do you really think it is appropriate to promote your stuff on a website run by a competitor?
Hi Vince, thanks for the reply. I can tell you (hopefully without pushing anyone's buttons :-) that the results I mentioned in the post that started this thread do not depend on any inlining or reverse inlining - i.e. the code IS uniquely breakpointable.
Those optimizations are on the agenda, but they're not dependent on OCG, which is the really new technology that has delivered the improved performance.
The "obfuscated data store" question depends on the capabilities of the debugger (and debug format). The compiler does provide full information as to where variables are stored (which can change between memory and registers) at any given point in the program, but many debuggers and debug formats do not have the capability to make use of this. Elf/Dwarf is AFAIK the most capable format in this regard.
I also appreciate your comments on selectability of optimizations. This is precisely the kind of feedback I was looking for in starting this thread. OCG is a powerful technology, but the final objective is to deliver to engineers what they need - and this thread clearly illustrates that different people have different needs!
Clyde
The major reason why this would happen is that the compiler or linker has performed what is known as "procedural abstraction" or "reverse inlining", i.e. a section of code common to two or more places in the program has been moved aside and replaced with calls to the common code section.
Is that an accurate summary of your concerns?
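To make the idea concrete, here is a minimal sketch of what procedural abstraction looks like at source level. All names (clamp_scale, read_sensor_a/b) and the particular sequence are invented for illustration; the optimizer does this on the generated code, not the source:

```c
#include <assert.h>

/* The factored-out common sequence: before the optimization, this
   code appeared inline in both callers below. */
static int clamp_scale(int x)
{
    x *= 2;
    if (x > 100)
        x = 100;
    return x;
}

int read_sensor_a(int raw)
{
    /* previously held the inline "scale and clamp" sequence itself */
    return clamp_scale(raw);
}

int read_sensor_b(int raw)
{
    /* the identical inline sequence here, now replaced by a call */
    return clamp_scale(raw);
}
```

A breakpoint placed inside clamp_scale now fires for both original source locations, which is exactly why such code stops being "uniquely breakpointable".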
Hi Clyde,
remember me from the beta days of the XA compiler?
Those optimizations are on the agenda, but they're not dependent on OCG, which is the really new technology that has delivered the improved performance.
To repeat my point: ANY optimization is desirable to SOME. The issue is an extremely flexible 'menu' from which you pick what suits YOUR environment.
Erik, I remember the XA, but not you, I'm afraid. But then I think I've forgotten more than I know, so don't take it personally :-)
I do not think that it is ok to put a brochure in a competitor's display rack - I find it unethical. Switching over to the electronic world doesn't make a difference.
The reason that HT has posted their advertisement here simply has to do with traffic and volume. The Keil forum has it and the HT forum does not. Even so, I'm not sure that it makes much sense to remove it.
Every few years, another Keil competitor has a "new" compiler release that generates smaller and faster code. At Keil, we welcome this kind of innovation and embrace how it helps to expand and grow the 8051 microcontroller marketplace.
Jon
Per Westermark just made a clear and common 'error' that usually is found at the assembly level.
I'm not sure that I see that as an example of something that might need to be debugged at assembly level. It's a straightforward 'C' level error caused by a failure to understand either side-effects, macro expansion rules or both. As such I wouldn't expect a competent 'C' programmer to make the error, never mind be unable to spot it at the 'C' level.
I would be interested to see a concrete example of an error that a competent 'C' programmer might make that would not be more easily spotted by reviewing the 'C' code rather than stepping through compiler generated assembly code.
I have to agree with Jack, although I think that making mistakes has little to do with what he considers "competence". When I have trouble with macros I usually look at the preprocessor output, I don't bother to debug them in assembly.
Here's one. Taken from real life, slightly simplified.
unsigned int i;
unsigned int some_array[12];
...
for (i = 8; i < 12; i++)
{
    some_array[i] = 0xFF;
}
After the loop, some_array[9...11] were found to be unmodified. No other tasks or ISRs access some_array at the same time. Did you find the error in the C code?
I got a bit of code written by another developer, and containing a library.
What wasn't obvious was that the nice guy had decided to create a function-looking #define without the very common courtesy of using all capitals.
Would you suspect the following code to step the pointer twice?
while (*msg) put_data(*msg++);
By your implication, I was incompetent for assuming that the documented "function" actually was a function. Something documented as a function should really behave as a function, don't you think?
Since I assumed it to be a function (as the documentation claimed), I saw no need to look at any preprocessor output. However, single-stepping through the code with mixed assembler/C made it obvious that the function call did not do what I expected, and why the extra increment managed to step past the termination character. If msg had had multiple characters, I might have noticed that only characters at even positions were emitted, but in this case my only character was emitted (as expected), followed by a very large number of random junk.
Life is a lot easier when you have written every single line of the code - as soon as someone else has been involved, you have to assume that they have followed the traditional best practices, or you will never manage to get a final product.
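A hedged reconstruction of that trap (the macro body, the buffer, and all names here are invented, not from the actual library): a lower-case function-like macro that expands its argument twice, so passing *msg++ steps the pointer twice per "call".

```c
#include <string.h>

static char sink[16];   /* stand-in for the real output channel */
static char last;
static int  n;

/* Hypothetical function-like macro in lower case, so it looks like a
   function. (c) appears twice in the expansion, so an argument such as
   *msg++ is evaluated - and the pointer incremented - twice. */
#define put_data(c) (sink[n++] = (c), last = (c))

/* With the macro above, only characters at even positions are emitted. */
int send_string(const char *msg)
{
    n = 0;
    while (*msg) put_data(*msg++);   /* msg advances twice per pass */
    sink[n] = '\0';
    return n;
}
```

With an odd-length string, the second expansion of the final pass reads past the terminator, and the loop then runs off into random junk, much as described above. The even-length case below stays well-defined.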
If what you are saying is true, then the compiler that translated that fragment of code is broken. Use a different compiler - one that you can trust.
Did you find the error in the C code ?
Given that snippet in isolation I can see no error. Please enlighten me.
There isn't one (the snippet was all that was necessary to reproduce the error, without any ISRs or multitasking). The programmer made one of two possible errors: either blindly trusting the compiler to generate correct assembly code, or not religiously sifting through the compiler's errata sheets to check for this situation.
Looking at the assembly code, however, it became quite clear that the compiler generated a completely bogus target address for the looping command used in the for-loop, which caused the microcontroller to jump out of the loop after the first iteration.
Not calling any names here, but that was the compiler supplied by the manufacturer of the chip, with no alternative compilers available. When presented with the C code and the corresponding assembly, their tech support commented "We do not think this is a compiler bug.". I've not contacted them again after this. Most of the program was written in assembly, anyway, which was probably a good thing.
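For reference, the snippet itself is perfectly conforming C. Built with any working compiler, all four tail elements must end up as 0xFF, which is what makes the bogus branch target in the generated loop a genuine compiler bug rather than a source error:

```c
/* The loop from the thread, wrapped in a function for a host build. */
unsigned int some_array[12];

void init_tail(void)
{
    unsigned int i;
    for (i = 8; i < 12; i++)
    {
        some_array[i] = 0xFF;
    }
}
```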
If what you are saying is true, then the compiler that translated that fragment of code is broken.
Why do you think that?
Not calling any names here, but that was the compiler supplied by the manufacturer of the chip, [...]
I don't know why I so suddenly start to think about Microchip...
The programmer made one of two possible errors: Either blindly trusting the compiler to generate correct assembly code, or not religiously sifting through the compilers errata sheets to check for this situation.
You've missed the point. I was after an example of the sort of error being discussed - a 'C' coding mistake caused by faulty logic or faulty implementation of correct logic. It's a given that one would have to inspect the assembly output if there is in fact no error in the 'C' code.
I was after an example of the sort of error being discussed - a 'C' coding mistake caused by faulty logic or faulty implementation of correct logic.
Well, any case of lawyer code (e.g. use of code with effects not specified by the C language standard) would suffice there. Even the most competent C programmer cannot tell whether the code will do what it is supposed to do without either knowing the implementation details of the compiler or looking at the generated assembly.
(And no, I don't consider knowing by heart what
some_function(++a, ++a);
does on seven different compilers to be part of being a competent C programmer. A competent C programmer will know that this is heavily compiler dependent and avoid such expressions whenever possible. There is no way of knowing whether this will work as intended by just looking at the C code)
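A minimal sketch of the portable alternative (some_function and demo are invented names): split the call into separate statements so the order of the increments is fixed by sequence points instead of being left to the compiler.

```c
static int arg1, arg2;

static void some_function(int x, int y)
{
    arg1 = x;
    arg2 = y;
}

/* Returns a compact encoding of the two argument values for checking. */
int demo(void)
{
    int a = 0;
    int first, second;

    /* some_function(++a, ++a);   <- the order of the two increments is
       not fixed by the standard; compilers legitimately disagree. */

    first  = ++a;   /* a == 1, guaranteed */
    second = ++a;   /* a == 2, guaranteed */
    some_function(first, second);

    return arg1 * 10 + arg2;   /* the same on every conforming compiler */
}
```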
Regarding the example:
Who really writes code like this? Are the (questionable) micro-optimizations from such a line's side effects ever worth it?
In our case, all people MUST undergo an initial period of training to ensure that the prescribed development rules are understood before they are let loose at writing code. Hence expressions like the above, and any resultant assumptions, are avoided.
Simple.
Who really writes code like this?
People who don't know better (and you might have to debug their code at some point), people who don't care and people who are actively malicious.
Are the (questionable) optimizations of any side effects from such a line ever worth it?
Some people may think that writing a program with as few keystrokes as possible is a worthwhile goal.
Granted, the example was glaringly obvious and should make anyone halfway familiar with C cringe. Any compiler with half a brain should emit a warning. However, MS VC++ doesn't seem to care about a = a++; ... other compilers I use do find this worth a warning.
"People who don't know better (and you might have to debug their code at some point), people who don't care and people who are actively malicious."
I take your point on that one. I have come across similar dubious practice code in legacy projects.
Not so long ago I was scanning over some code of a (supposedly senior) team member. There was a block of believable code, in a released project, that had a comment just above it stating:
/* THIS CODE DOES NOT WORK */
Not too surprisingly, the team member wasn't part of my team for much longer!
Not too surprisingly, the team member wasn't part of my team for much longer!
Well, the question is: if the code (obviously) didn't work, why wasn't this caught during testing? Or was the comment outdated and the code correct?
No, not simple. Besides assuming that you do manage to teach them all to behave, you also assume that you really are in control of every path by which source code arrives on your table.
Did you see my example? The library in question wasn't written in-house, but for policy reasons (sellers like partnerships, since it looks so nice on the web page...) you sometimes have to integrate code that has suddenly been dropped in your lap.
Sometimes management buys products that you may have to take care of. Sometimes your product needs to be integrated with a customer's product. Sometimes someone decides to buy a magic library that will greatly decrease the development time of a new feature. There are many ways for new and strange code to get inside the house, and not all of it was written by really competent developers.
Well, any case of lawyer code (e.g. use of code with effects not specified by the C language standard) would suffice there.
A competent 'C' programmer wouldn't do that.
What this boils down to for me is this:
If you find yourself reaching for the ICE or stepping through compiler output on a regular basis you are either working with 3rd party junk rather than decent development tools or libraries, or the code you have written is junk. The 'have a go' programmers who 'don't care a hoot' about the standard find themselves unable to get anything to work without constant debugging which they are incapable of doing at source level. Why? Because they cannot tell whether the code they have written *should* work or not. They find out how it *actually* works by experimenting with the compiler, rather than just reading the damn manual.
This is why the world is full of unreliable, unmaintainable junk.
If you find yourself reaching for the ICE or stepping through compiler output on a regular basis you are either working with 3rd party junk rather than decent development tools or libraries, or the code you have written is junk.
I think I have to agree with that. Most code either works on the first run, or reading the code is enough to see what ails it. A bit of guard code can help in case I have made an incorrect assumption about the value range of the input, or in case I'm inserting the new code in already broken code.
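As an aside, the guard code meant here might look like this minimal sketch (the buffer, its length, and the function name are invented): validate the assumed input range up front, so a wrong assumption is reported instead of silently corrupting memory.

```c
#define BUF_LEN 12u

static unsigned int buf[BUF_LEN];

/* Guard code: check the assumed range before using the index. */
int set_entry(unsigned int index, unsigned int value)
{
    if (index >= BUF_LEN)
        return -1;          /* assumption violated: report, don't write */
    buf[index] = value;
    return 0;
}
```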
I think I have to agree with that. Most code either works on the first run, or reading the code is enough to see what ails it
Try, for instance, to write an interface to an FTDI Vinculum and debug it by the above method; you will die before the program runs.
There are beautiful debugging theories based on the assumption that all information given is complete and correct. Just one comment: male cow manure.
You are always so poetic.
...unable to get anything to work without constant debugging which they are incapable of doing at source level.
I agree. Among other sins, this approach yields code that is less maintainable and probably less portable.
Among other sins, this approach yields code that is less maintainable and probably less portable.
Your debugging method has NOTHING to do with code being "less maintainable". What makes code, among other things, "less maintainable" is 'fixing' bugs instead of removing them.
Erik, were you not, at least once in your (presumably) long career, tempted to win a few nanos here and there by using a platform-specific, nasty trick? That's what this is all about. Jack belongs to the school of standards and "working by the book"; you are the guy who wants to get the work done without losing a single nanosecond. I do admire your approach, but what I have seen so far persuaded me to join the camp of people who like to hide behind the standard. Maybe it is because I live in the universe of millisecond-critical applications, not less (for now).
Were you not, at least once in your (presumably) long career, tempted to win a few nanos here and there by using a platform-specific, nasty trick?
The only "platform-specific tricks" I have implemented are tricks specific to the particular derivative I am working with. If a given derivative has, for example, multiple data pointers and the compiler does not use them, I will, in a time-critical application, go straight to assembler. And there I use very platform- (derivative-) specific tricks.
If by platform-specific you refer to the (re the '51) stupid 'portability' (who has ever heard of a 'small embedded' project being ported?), I confess that when the Keil compiler allows me to specify DATA, I do it.