There is a new 8051 C compiler in beta test that beats Keil's compiler by 35% on code size and 14% on speed on the Dhrystone benchmark. And there is no need to select a memory model or use special keywords to control data placement.
More details here: www.htsoft.com/.../silabs8051beta
Per Westermark just made a clear and common 'error' that usually is found at the assembly level.
I'm not sure that I see that as an example of something that might need to be debugged at assembly level. It's a straightforward 'C' level error caused by a failure to understand either side-effects, macro expansion rules or both. As such I wouldn't expect a competent 'C' programmer to make the error, never mind be unable to spot it at the 'C' level.
I would be interested to see a concrete example of an error that a competent 'C' programmer might make that would not be more easily spotted by reviewing the 'C' code rather than stepping through compiler generated assembly code.
I do not think that it is OK to put a brochure in a competitor's display rack - I find it unethical. Switching over to the electronic world doesn't make a difference.
The reason that HT has posted their advertisement here simply has to do with traffic and volume. The Keil forum has it and the HT forum does not. Even so, I'm not sure that it makes much sense to remove it.
Every few years, another Keil competitor has a "new" compiler release that generates smaller and faster code. At Keil, we welcome this kind of innovation and embrace how it helps to expand and grow the 8051 microcontroller marketplace.
Jon
Erik, I remember the XA, but not you, I'm afraid. But then I think I've forgotten more than I know, so don't take it personally :-)
Hi Clyde,
remember me from the beta days of the XA compiler?
Those optimizations are on the agenda, but they're not dependent on OCG, which is the really new technology that has delivered the improved performance. To repeat my point: ANY optimization is desirable to SOME; the issue is an extremely flexible 'menu' from which you pick what suits YOUR environment.
Erik
The major reason why this would happen is that the compiler or linker has performed what is known as "procedural abstraction" or "reverse inlining", i.e. a section of code common to two or more places in the program has been moved aside and replaced with calls to the common code section.
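To make the idea concrete, here is a minimal sketch of what procedural abstraction looks like at the source level. The function names and logic are purely illustrative, not from any real compiler's output: imagine the compiler found the scale-and-clip sequence duplicated in two functions and factored it out into a hidden routine. A breakpoint set "inside" either original function may then fire when the other one runs, because both now execute the shared code.

```c
/* Illustrative example of "procedural abstraction" / "reverse inlining".
 * Both brightness() and contrast() originally contained the same
 * scale-and-clip sequence; an optimizer may factor it into one hidden
 * routine, as hand-written below. A breakpoint in scale_and_clip()
 * is then hit from BOTH callers - it is no longer uniquely
 * breakpointable per original source line. */

static int scale_and_clip(int x)     /* the factored-out common code */
{
    x *= 4;
    if (x > 255)
        x = 255;
    return x;
}

int brightness(int pixel) { return scale_and_clip(pixel); }
int contrast(int pixel)   { return scale_and_clip(pixel + 16); }
```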
Is that an accurate summary of your concerns?
Hi Vince, thanks for the reply. I can tell you (hopefully without pushing anyone's buttons :-) that the results I mentioned in the post that started this thread do not depend on any inlining or reverse inlining - i.e. the code IS uniquely breakpointable.
Those optimizations are on the agenda, but they're not dependent on OCG, which is the really new technology that has delivered the improved performance.
The "obfuscated data store" question depends on the capabilities of the debugger (and debug format). The compiler does provide full information as to where variables are stored (which can change between memory and registers) at any given point in the program, but many debuggers and debug formats do not have the capability to make use of this. Elf/Dwarf is AFAIK the most capable format in this regard.
I also appreciate your comments on selectability of optimizations. This is precisely the kind of feedback I was looking for in starting this thread. OCG is a powerful technology, but the final objective is to deliver to engineers what they need - and this thread clearly illustrates that different people have different needs!
Clyde
Yes, that is what I am saying. Most errors are either faulty logic or the faulty implementation of correct logic... and of course the faulty implementation of faulty logic.
At the "C" level, these can be found 'easily' since your emulator/simulator can show you the logical flow as you single-step through the high-level "C" code, and you watch the data-stores change accordingly. But since the 'hard' problems take a much larger percent of our debugging time, we usually are single-stepping through the underlying assembly code that supports each of the "C" statements in order to find our mistakes. Per Westermark just made a clear and common 'error' that usually is found at the assembly level.
The reason I elaborate on this distinction has to do with the highly-optimized code that causes the underlying assembly language to be seemingly scattered and dis-jointed due to the use of shared code segments and other "odd looking" (but valid) code that the compiler may generate.
--Cpt. Vince Foster 2nd Cannon Place Fort Marcy Park, VA
If you use #define expressions you may get into trouble with multiple increments/decrements that aren't visible when you single-step the C source. You either have to run the pre-processor to get the code expansion, or single-step the assembler code.
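A minimal sketch of the trap being described (MAX is a typical textbook macro, used here only as an example): the macro argument is textually expanded more than once, so a `++` inside it executes more than once. The C source shows a single increment; only the preprocessor output or the generated assembly reveals the second one.

```c
/* Classic multiple-evaluation bug: MAX expands each argument twice,
 * so i++ executes twice even though the source shows it once.
 * Single-stepping the C line will not reveal this; the preprocessed
 * code or the assembly will. */
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int demo(void)
{
    int i = 5;
    int m = MAX(i++, 4);  /* expands to ((i++) > (4) ? (i++) : (4)) */
    (void)m;              /* m is 6: the second i++ supplied the value */
    return i;             /* i ends at 7, not the 6 the source suggests */
}
```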
C++ also has a bit of magic that can require assembler debugging unless you already "know" where you should put a breakpoint to catch the next step.
Even if the embedded engineer doesn't use such formal qualification testing methods, the "debugging" process is basically the same as the FQT. You break at a known point (in "C") and track/trace the flow and data through the unit under test (the function or suspected area of error), and figure out why your logic/implementation is wrong.
Typically this is done at the assembly level and not simply at the "C" level, since the 'competent' can easily spot the "C" level errors quickly. It is on our more 'advanced' errors in thought that we spend all of our time "debugging" at the assembly level.
I'm not sure I have understood this. You appear to be saying that given an error in a 'C' program that is caused by:
a) faulty logic, or
b) faulty implementation of correct logic
you might find yourself debugging at assembly level to spot the error?
If I may answer for erik until he does so himself, "Yes." You are on the right track, as that is the most common break-point-able snafu when debugging. Another concern is an obfuscated method of data-store handling (especially passing parameters to, and returning values from, functions).
The compiler that you advocate with project-wide "Omniscient Code Generation" is a great and welcomed addition to the optimization processes. But my (our) worry is that the code becomes difficult to track or trace the flow for "debugging" purposes. Much of this thread has to do with the importance of this ability versus the gained benefits of the "tightest-code ever" trade off. Sometimes we are in control of the *need* for it, and sometimes we are not.
My concern is equally shared between development debugging phases, and code validation and qualification phases. The ability to break before a function, load the test-case passed parameters into the appropriate data-stores, and then execute the code unit (function), and finally extract the results is a typical [automated] validation process. This allows the test benches to stress the ranges and conditions of the unit under test and extract the output for post-analysis.
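The load-parameters/execute/extract-results cycle described above can be sketched as a small table-driven test bench. Everything here is illustrative - `saturating_add` is a stand-in for the unit under test, and the vectors are invented - but the shape is the same whether the harness runs on the host or is driven by a debugger script:

```c
/* Sketch of the validation pattern described above: load test-case
 * inputs into the unit's parameters, execute the unit, extract and
 * compare the results. saturating_add() is a hypothetical unit under
 * test; the vectors are illustrative. */

unsigned char saturating_add(unsigned char a, unsigned char b)
{
    unsigned int sum = (unsigned int)a + (unsigned int)b;
    return (sum > 255u) ? 255u : (unsigned char)sum;
}

struct vector { unsigned char a, b, expected; };

int run_bench(void)  /* returns number of failing test vectors */
{
    static const struct vector cases[] = {
        {  10,  20,  30 },   /* nominal        */
        { 200, 100, 255 },   /* overflow clips */
        { 255,   1, 255 },   /* boundary       */
    };
    int failures = 0;
    for (unsigned i = 0; i < sizeof cases / sizeof cases[0]; i++) {
        unsigned char got = saturating_add(cases[i].a, cases[i].b);
        if (got != cases[i].expected)
            failures++;
    }
    return failures;
}
```

If the compiler has merged the unit's entry or exit with other code, the "break before the function" step of this process is exactly what becomes unreliable.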
When the break-points and data-stores are not cleanly delineated, the unit-level testing (or "debug") becomes a bit hectic... and possibly impossible. Even the test-benches need qualification, so having to jump through hoops in order to ensure a valid test takes not only time but also extensive and highly tailored documentation; not to mention the explanation to those who are in QA and might not be as savvy to the complexities of the system.
Being able to use a compiler optimization switch to enable/disable such delineation would ease some of these issues. (E.g. Keil's optimization level is selectable, but I'm sure most of the more serious users would prefer to pick and choose which types of optimizations take place, and not just "Level 9 and all of the levels before it." But we also realize the interdependencies and the overall difficulty in picking and choosing the types of optimizations.) Having the OCG do its thing with the caveat that unique entry/exit nodes are to be kept in order to allow break-points would be a good start.
I like the OCG method, but I am concerned about the sacrifice needed in the ease of development in order to eke out a few percentage points of margin on a design.
My personal opinion is that the OCG method will eventually become the new standard for embedded system compilers, and the leading products will have paid a special effort to address the debug, development, and validation issues.
Erik, I've been following the various threads within this thread with some interest. I wasn't entirely sure what you meant by "uniquely breakpointable" at first, but having done some homework, I now believe that what you are referring to is the problem with some compilers that setting a breakpoint on one line of code may cause the debugger to stop at that breakpoint even though the execution sequence is within a different function or at least section of code.
I am a Keil user, have been for many years, and still believe that it is the best tool for the '51 developments that I have been involved in. So do I. But is there anything wrong with wanting to make 'the best' better? I have no problems whatsoever with the optimizations provided by Keil (they are optional and I do not use them); I just want another level where some are implemented and the "debuggability killers" are not.
Someone above talked about "debugging by printf". That will work in many cases, but in "my world" where everything hangs on nanoseconds, inserting a time consuming call can make the "debug aid" a bug.
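One common low-overhead alternative, sketched below under the assumption that a few words of RAM can be spared (all names and sizes are illustrative): log raw values into a RAM ring buffer and inspect the buffer later from the debugger, instead of calling a time-consuming printf in the hot path.

```c
/* Sketch of a RAM trace buffer as a printf substitute in
 * timing-critical code. A trace() call is a store and an index
 * update - a handful of cycles, no I/O. The buffer contents are
 * examined afterwards in the debugger. Names/sizes are illustrative. */

#define TRACE_SIZE 64u   /* power of two keeps the wrap-around cheap */

static volatile unsigned int trace_buf[TRACE_SIZE];
static volatile unsigned int trace_idx;

static void trace(unsigned int value)
{
    trace_buf[trace_idx & (TRACE_SIZE - 1u)] = value;
    trace_idx++;
}
```

On an 8051 one would likely place the buffer in xdata and shrink the element type, but the principle is the same.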
Going back to dedicated TTL circuit debugging really does not help to emphasise the argument. And yes, I have done my share of TTL circuit debugging, so I feel I know enough to say: it does not have a great deal of similarity to compiler optimizations. Well, if you reread the post you will see that the above was in support of YOUR argument that thinking is required. That thinking is required, of course, has nothing to do with optimization; but what we are discussing is debugging, and the hurdles the 'threading' optimization puts in the way of that.
Per,
I too have recently used the Microchip C compiler.
The Keil C51 compiler certainly has its quirks; but compared to the Microchip "nice" compiler and its quirks, my opinion is that the Keil is more predictable in its output and therefore preferable.
Now I have had the opportunity to migrate across to the Keil ARM compiler. One comment from me with regards to the compiler - Absolutely lovely!
I don't think anyone is contradicting you. Erik is also using the Keil compiler. Most probably because he wants to, and not because he is forced to.
I have recently done a bit of PIC programming, using the Microchip compiler since Keil doesn't support that architecture. The "nice" compiler doesn't even support the full grammar... And I hit at least one construct where it failed to produce a binary - it is currently unknown whether the error is in the compiler or the linker.
We developers do like good tools - it's just that our definition of good (and what kind of tools we prefer) varies with previous experience and with the type of products we develop. A one-man project has different requirements than a 100-man project. Life-sustaining equipment has different requirements than a set-top box. The people developing the set-top box would never manage to finish their product if they had to guarantee reliability according to the standards required of a pacemaker.
Because of the different needs, debates about A _or_ B often turn into heated arguments that don't lead to one or the other side switching opinion, and most of the time without making any readers switch position either. Debates that focus on comparing two alternatives have a much better chance of making people listen and try the alternative, since the debate isn't about right or wrong but about how one tool can complement another, or (if the discussion isn't between combatants) leads to information on how users of tool A (who can't afford tool B) can find reasonable workarounds to still be productive.