There is a new 8051 C compiler in beta test that beats Keil's compiler by 35% in code size and 14% in speed on the Dhrystone benchmark. And there is no need to select a memory model or use special keywords to control data placement.
More details here: www.htsoft.com/.../silabs8051beta
1) No reference to "best choice" of optimizing flags for the two compilers.
2) Too small application to show difference in code size - remember that size of RTL affects small projects more.
3) How much of the code optimization results in speed changes for other applications? The Dhrystone isn't exactly relevant for an 8-bit microcontroller with 1-bit instructions...
They really have to produce more information before making any claims in one direction or the other. Compile an application that makes heavy use of one-bit variables and compare the two compilers, then compile a program using a lot of 16-bit or 32-bit variables, and you will see that the comparisons vary a lot. Code size and speed can only be deduced from a significantly large base of widely varying - but applicable - code.
I am sure anyone with enough experience can make a compiler that generates faster and more compact code - who gives a hoot if the result is not 'uniquely breakpointable'? The result from Keil can be much better if you use a higher optimization level, but who in his/her right mind would use code the emulator cannot 'uniquely' breakpoint on? Sorry, I have now offended a lot of people, but debuggability is far more important than those last few percent of efficiency. What really offends me is that nobody (yet) has made a compiler/linker/optimizer that fully maintains program flow and is optimized in all other respects.
Erik
PS Clyde, do you really think it is appropriate to promote your stuff on a website run by a competitor?
Per,
I too have recently used the Microchip C compiler.
The Keil C51 compiler certainly has its quirks; but compared to the Microchip "nice" compiler and its quirks, my opinion is that the Keil is more predictable in its output and therefore preferable.
Now I have had the opportunity to migrate across to the Keil ARM compiler. One comment from me with regards to the compiler - Absolutely lovely!
Going back to dedicated TTL circuit debugging really does not help to emphasise the argument. And yes, I have done my share of TTL circuit debugging, so I feel I know enough to say - it does not have a great deal of similarity to compiler optimizations. Well, if you reread the post you will see that the above was in support of YOUR argument that thinking is required. That thinking is required, of course, has nothing to do with optimization; what we are discussing is debugging and the hurdles that the 'threading' optimization puts in its way.
I am a Keil user, have been for many years, and still believe that it is the best tool for the '51 developments that I have been involved in. So do I. But is there anything wrong with wanting 'the best' to be better? I have no problems whatsoever with the optimizations provided by Keil (they are optional and I do not use them); I just want another level where some are implemented and the "debuggability killers" are not.
Someone above talked about "debugging by printf". That will work in many cases, but in "my world" where everything hangs on nanoseconds, inserting a time consuming call can make the "debug aid" a bug.
Erik, I've been following the various threads within this thread with some interest. I wasn't entirely sure what you meant by "uniquely breakpointable" at first, but having done some homework, I now believe that what you are referring to is the problem with some compilers that setting a breakpoint on one line of code may cause the debugger to stop at that breakpoint even though the execution sequence is within a different function or at least section of code.
The major reason why this would happen is that the compiler or linker has performed what is known as "procedural abstraction" or "reverse inlining", i.e. a section of code common to two or more places in the program has been moved aside and replaced with calls to the common code section.
Is that an accurate summary of your concerns?
If I may answer for Erik until he does so himself: "Yes." You are on the right track, as that is the most common breakpoint snafu when debugging. Another concern is an obfuscated method of data-store handling (especially passing parameters to, and returning values from, functions).
The compiler that you advocate, with its project-wide "Omniscient Code Generation", is a great and welcome addition to the optimization processes. But my (our) worry is that the code becomes difficult to track, or to trace the flow of, for "debugging" purposes. Much of this thread has to do with the importance of this ability versus the benefits gained in the "tightest-code-ever" trade-off. Sometimes we are in control of the *need* for it, and sometimes we are not.
My concern is equally shared between development debugging phases, and code validation and qualification phases. The ability to break before a function, load the test-case passed parameters into the appropriate data-stores, and then execute the code unit (function), and finally extract the results is a typical [automated] validation process. This allows the test benches to stress the ranges and conditions of the unit under test and extract the output for post-analysis.
When the break-points and data-stores are not cleanly delineated, then the unit-level testing (or "debug") becomes a bit hectic... and possibly impossible. Even the test benches need qualification, so having to jump through hoops in order to ensure a valid test takes not only time but also extensive and highly tailored documentation; not to mention the explanation to those who are in QA and might not be as savvy to the complexities of the system.
Even if the embedded engineer doesn't do such formal qualification testing methods, the "debugging" process is basically the same as the FQT. You break at a known point (in "C") and track/trace the flow and data through the unit under test (the function or suspected area of error), and figure out why your logic/implementation is wrong.
Typically this is done at the assembly level and not simply at the "C" level since the 'competent' can easily see the "C" level errors quickly. It is our 'advanced' errors in thought that we spend all of our time "debugging" the assembly.
Being able to use a compiler optimization switch to enable/disable such delineation would ease some of these issues. (E.g. Keil's optimization level is selectable, but I'm sure most of the more serious users would prefer to pick and choose which types of optimizations take place, and not just "Level 9 and all of the levels before it." But we also realize the interdependencies and the overall difficulty in picking and choosing the types of optimizations.) Having the OCG do its thing, with the caveat that unique entry/exit nodes are kept in order to allow break-points, would be a good start.
I like the OCG method, but I am concerned over the sacrifice needed in the ease of development in order to eke out a few percentage points of margin on a design.
My personal opinion is that the OCG method will eventually become the new standard for embedded system compilers, and the leading products will have paid a special effort to address the debug, development, and validation issues.
--Cpt. Vince Foster 2nd Cannon Place Fort Marcy Park, VA
I'm not sure I have understood this. You appear to be saying that given an error in a 'C' program that is caused by:
a) Faulty logic or b) Faulty implementation of correct logic
you might find yourself debugging at assembly level to spot the error?
If you use #define expressions you may get into trouble with multiple increments/decrements that aren't visible when you single-step the C source. You either have to run the pre-processor to get the code expansion, or single-step the assembler code.
C++ also has a bit of magic that can require assembler debugging unless you already "know" where you should put a breakpoint to catch the next step.
Yes, that is what I am saying. Most errors are either faulty logic or the faulty implementation of correct logic... and of course the faulty implementation of faulty logic.
At the "C" level, these can be found 'easily' since your emulator/simulator can show you the logical flow as you single-step through the high-level "C" code, and you watch the data-stores change accordingly. But since the 'hard' problems take a much larger percent of our debugging time, we usually are single-stepping through the underlying assembly code that supports each of the "C" statements in order to find our mistakes. Per Westermark just made a clear and common 'error' that usually is found at the assembly level.
The reason I elaborate on this distinction has to do with the highly-optimized code that causes the underlying assembly language to be seemingly scattered and dis-jointed due to the use of shared code segments and other "odd looking" (but valid) code that the compiler may generate.
Hi Vince, thanks for the reply. I can tell you (hopefully without pushing anyone's buttons :-) that the results I mentioned in the post that started this thread do not depend on any inlining or reverse inlining - i.e. the code IS uniquely breakpointable.
Those optimizations are on the agenda, but they're not dependent on OCG, which is the really new technology that has delivered the improved performance.
The "obfuscated data store" question depends on the capabilities of the debugger (and debug format). The compiler does provide full information as to where variables are stored (which can change between memory and registers) at any given point in the program, but many debuggers and debug formats do not have the capability to make use of this. Elf/Dwarf is AFAIK the most capable format in this regard.
I also appreciate your comments on selectability of optimizations. This is precisely the kind of feedback I was looking for in starting this thread. OCG is a powerful technology, but the final objective is to deliver to engineers what they need - and this thread clearly illustrates that different people have different needs!
Clyde
Hi Clyde,
remember me from the beta days of the XA compiler?
"Those optimizations are on the agenda, but they're not dependent on OCG, which is the really new technology that has delivered the improved performance." To repeat my point: ANY optimization is desirable to SOME; the issue is an extremely flexible 'menu' from which you choose what suits YOUR environment.
Erik, I remember the XA, but not you, I'm afraid. But then I think I've forgotten more than I know, so don't take it personally :-)
I do not think that it is OK to put a brochure in a competitor's display rack - I find it unethical. Switching over to the electronic world doesn't make a difference.
The reason that HT has posted their advertisement here simply has to do with traffic and volume. The Keil forum has it and the HT forum does not. Even so, I'm not sure that it makes much sense to remove it.
Every few years, another Keil competitor has a "new" compiler release that generates smaller and faster code. At Keil, we welcome this kind of innovation and embrace how it helps to expand and grow the 8051 microcontroller marketplace.
Jon
Per Westermark just made a clear and common 'error' that usually is found at the assembly level.
I'm not sure that I see that as an example of something that might need to be debugged at assembly level. It's a straightforward 'C' level error caused by a failure to understand either side-effects, macro expansion rules or both. As such I wouldn't expect a competent 'C' programmer to make the error, never mind be unable to spot it at the 'C' level.
I would be interested to see a concrete example of an error that a competent 'C' programmer might make that would not be more easily spotted by reviewing the 'C' code rather than stepping through compiler generated assembly code.
I have to agree with Jack, although I think that making mistakes has little to do with what he considers "competence". When I have trouble with macros I usually look at the preprocessor output, I don't bother to debug them in assembly.