There is a new 8051 C compiler in beta test that beats Keil's compiler by 35% in code size and 14% in speed on the Dhrystone benchmark. And there is no need to select a memory model or use special keywords to control data placement.
More details here: www.htsoft.com/.../silabs8051beta
1) No reference to the "best choice" of optimization flags for the two compilers.
2) Too small an application to show differences in code size - remember that the size of the RTL affects small projects more.
3) How much of the code optimization translates into speed changes for other applications? Dhrystone isn't exactly relevant for an 8-bit microcontroller with 1-bit instructions...
They really have to produce more information before making any claims in one direction or the other. Compile an application that makes heavy use of one-bit variables and compare the two compilers, then compile a program using a lot of 16-bit or 32-bit variables, and you will see that the comparisons vary a lot. Code size and speed can only be deduced from a significantly large code base of very varied - but applicable - code.
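A hedged illustration (the fragments below are invented, not taken from either vendor's benchmark): a compiler that is strong on the 8051's bit instructions may look great on the first routine and mediocre on the second, where multi-byte arithmetic and library calls dominate.

bit flag_a, flag_b, flag_c;                  /* Keil-style 1-bit variables */

unsigned char poll_flags( void )
{
    return ( flag_a && flag_b ) || flag_c;   /* maps onto 8051 bit opcodes */
}

unsigned long acc;

void integrate( unsigned long sample )
{
    acc += sample * 3UL;                     /* multi-byte maths, RTL-heavy */
}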
I am sure anyone with enough experience can make a compiler that produces faster and more compact code, who gives a hoot if the result is not 'uniquely breakpointable'. The result from Keil can be much better if you use a higher optimization level, but who in his/her right mind would use code the emulator cannot 'uniquely' breakpoint on? Sorry, I have now offended a lot of people, but debuggability is far more important than those last few percent of efficiency. What really offends me is that nobody (yet) has made a compiler/linker/optimizer that fully maintains program flow and is optimized in all other respects.
Erik
PS Clyde, do you really think it is appropriate to promote your stuff on a website run by a competitor?
"People who don't know better (and you might have to debug their code at some point), people who don't care and people who are actively malicious."
I take your point on that one. I have come across similar dubious practice code in legacy projects.
Not so long ago I was scanning over some code of a (supposedly senior) team member. There was a block of believable code, in a released project, that had a comment just above it stating:
/* THIS CODE DOES NOT WORK */
Not too surprisingly, the team member wasn't part of my team for much longer!
Well, apart from anything else, I pasted the code fragment into a C file and compiled it with a few of the variety of compilers I have on hand. All produced code that delivered the expected result.
Ah, the forum made it look as though you were replying to Per Westermark's post, hence my question. It's a good idea to always quote a bit of the post you're replying to, to avoid confusion.
Not too surprisingly, the team member wasn't part of my team for much longer!
Well, the question is: If the code (obviously) didn't work, why wasn't this caught during testing? Or was the comment outdated and the code correct?
No, not simple. Besides assuming that you do manage to teach them all to behave, you also assume that you really are in control of all paths of source code onto your table.
Did you see my example? The library in question wasn't written in-house, but because of policy reasons (sellers like partnerships, since it looks so nice on the web page...) you sometimes have to integrate code that has suddenly been dumped in your lap.
Sometimes management buys products that you may have to take care of. Sometimes your product needs to be integrated with a customer's product. Sometimes someone decides to buy a magic library that will greatly decrease the development time of a new feature. There are many ways for new and strange code to get inside the house. Not all of it is written by really competent developers.
By your implication, I was incompetent for assuming that the documented "function" actually was a function.
Not at all.
Something documented as a function should really behave as a function, don't you think?
Absolutely.
Life is a lot easier when you have written every single line of the code - as soon as someone else has been involved, you have to assume that they have followed the traditional best practices, or you will never manage to get to a final product.
Indeed.
What I was after was an example to illustrate the premise I was querying, which was:
You appear to be saying that given an error in a 'C' program that is caused by:
a) Faulty logic or b) Faulty implementation of correct logic
you might find yourself debugging at assembly level to spot the error?
To that end I asked for:
I would be interested to see a concrete example of an error that a competent 'C' programmer might make that would not be more easily spotted by reviewing the 'C' code rather than stepping through compiler generated assembly code.
In other words, an algorithmic or logical error rather than one introduced by someone else's mistake.
I'm interested because if there are situations like this, I certainly haven't come across them. If I find a bug when testing I know that the chances that the problem is in my code are high, so I check my code. I can't imagine why I might find it easier in this situation to look at compiler generated assembly rather than the source code I actually wrote.
A lot of debuggers allow you to watch _both_ your C code and the assembler, so it does not represent a big disadvantage to look at the assembler.
In my case, it showed where I was guilty of an incorrect assumption. I assumed that the library was written by a competent developer.
But there could also be a situation where I am blind to my own errors, because part of my brain has already decided that a specific piece of code _must_ be correct. Seeing both assembly and C could then kick my brain out of its incorrect track and get it to see what is really there, instead of what it assumes is there.
Have you ever looked at a table for your keys, and failed to see them just because your mind has already decided that they can't be there, or that they have to be on the right side of the desk, or that the bright red key ring sticking out under a paper just can't be your keys since you know you haven't touched that paper since the last time you had your keys?
Our brain is a marvel at pattern matching, which is the reason it is hopeless to try to write an application with any real intelligence. But an engine with too good pattern matching has a tendency to sometimes find patterns where no patterns exist.
I find it a lot easier to stay focused when looking at really advanced algorithms. Most overlooked errors are likely to be in the trivial parts of the code - or maybe in the debug printout that is left inside the algorithm. If you see 50 lines of non-trivial code and three debug printouts, you are likely to skip over the debug lines and put all your focus on the "real" code. Such irrational - but not too uncommon - decisions can easily make you miss that little = instead of == in one of the printouts. Or maybe someone has been "optimizing" a bit and added a ++ in the printout, since that saves a line of code - until I come along and decide that the printouts should be conditionally included...
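A minimal sketch of the kind of "trivial" line the eye skips over (the names are invented): a single = where == was meant, hidden inside a debug printout, quietly clobbering the value under test.

#include <stdio.h>

static int check_level( int level, int limit, int threshold )
{
    printf( "at limit? %d\n", level = limit );   /* meant: level == limit    */
    return level > threshold;                    /* 'level' is now clobbered */
}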
In the old days, we needed the assembler output since we couldn't trust the compilers. Today's compilers are so reliable that we can limit ourselves to taking a peek at the code output for extremely time-critical code, looking at the compiler output to learn how to use the assembler instructions of a new processor, or, now and then, just getting our brains to switch track and start processing data again instead of living on old assumptions.
Well, any case of lawyer code (e.g. use of code with effects not specified by the C language standard) would suffice there.
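A hedged illustration (invented, not from any of the posts above): the C standard does not specify the order in which the two calls below are evaluated, so two conforming compilers may legitimately print different results.

#include <stdio.h>

int counter = 0;
int bump( void ) { return ++counter; }

int main( void )
{
    int result = bump() - bump() * 2;   /* evaluation order is unspecified  */
    printf( "%d\n", result );           /* -3 or 0, both answers conforming */
    return 0;
}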
A competent 'C' programmer wouldn't do that.
What this boils down to for me is this:
If you find yourself reaching for the ICE or stepping through compiler output on a regular basis, you are either working with 3rd party junk rather than decent development tools or libraries, or the code you have written is junk. The 'have a go' programmers who 'don't care a hoot' about the standard find themselves unable to get anything to work without constant debugging, which they are incapable of doing at source level. Why? Because they cannot tell whether the code they have written *should* work or not. They find out how it *actually* works by experimenting with the compiler, rather than just reading the damn manual.
This is why the world is full of unreliable, unmaintainable junk.
Mr. Sprat, those logic/implementation errors are found in the "C" construct in most occurrences... but not necessarily for most of the 'time.'
You can find ten of those errors in ten minutes, but the one that takes two hours is the type I'm talking about. Hence, most of your 'time' is spent debugging something where it is not glaringly obvious that the faulty "C" implementation of faulty logic is in error. Usually the problems that take the most time are those that result from 'undocumented features' in the data-sheets, somebody else's code, or are self-generated. And for those long-duration bugs, you'll need to delve into the assembly.
Your request for a logic/implementation flaw that is more easily found at the assembly level was fulfilled by Mr. Westermark's #define type errors. Yes, it is possible to deduce it from within the "C" platform, but a quicker approach is to validate the expanded result of the #define at the assembly level. It was a valid example of your request.
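A hedged sketch of that sort of #define error (the macro and names are invented, not Mr. Westermark's actual case): the source reads plausibly, but the expanded text - which is what the assembly listing reflects - is wrong for lack of parentheses.

#define ADC_TO_MV(raw)  raw * 5000 / 1024        /* should be ((raw) * 5000 / 1024) */

unsigned int millivolts( unsigned int raw, unsigned int offset )
{
    return ADC_TO_MV( raw - offset );    /* expands to: raw - offset * 5000 / 1024 */
}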
But an example of a 'bug' that is more easily found at the hand-tracing level of assembly code is this one (yes, it is an 'undocumented feature' of Keil's optimization settings, but it could be argued that I created it myself too)...
extern u16 Get_ADC_Vector( void );

#define MIN_VOLTAGE (2458)   // 2457.6 = ((FULL_SCALE/2)+(FULL_SCALE*0.10))

void Intruder_isr( void ) interrupt 0 using 2
{
    u16 val;

    do
    {
        Charge_Pump( );              // should take 10mS/kV
        val = Get_ADC_Vector( );     // lsb = 0.043 kV
    } while( val < MIN_VOLTAGE );    // removed time-out and hi-rel code
                                     // for 'example' clarity
}
The "Get_ADC_Vector( )" function is external and in another module--as you would expect. The compiler (Keil) compiles and links with zero errors/warnings.
With "Global Register Coloring" enabled in Keil's "Code Optimization" options, the Keil compiler did/does not account for the register bank switch caused by the "using 2" directive, so the parameter passed back from the function is in error: the compiler generates code that accesses the registers absolutely and tries to place them into the function-return registers.
Keil should either have stated that you must select "Don't use absolute register access" when you combine "Global Register Coloring" with the "using" directive in your code, -OR- Keil's optimizer should have handled it properly.
This example of an assembly-traced bug didn't take too long, because I realized that the 'using' directive does modify the reg-bank and I knew it was a risk point.
But initially I, like any typical user, was relying on Keil to handle that deviation properly, especially since the pre-optimization build proved valid. (An OCG would 'see' this cross-module error and avoid it.)
FYI: My cure was to eliminate the "using" directive, since it was clearly not needed. Because I take the time/overhead to call two functions from within the ISR, I could obviously afford the time cost of not switching banks. I also un-checked "Global Register Coloring", since Keil proved that you cannot trust it. Keil's own documentation does not clarify the conflict.
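For illustration, a sketch of that cure (same invented names as the fragment above): the ISR with the 'using' directive removed, so the register-bank assumption no longer conflicts with Global Register Coloring.

void Intruder_isr( void ) interrupt 0    /* 'using 2' removed */
{
    u16 val;

    do
    {
        Charge_Pump( );
        val = Get_ADC_Vector( );
    } while( val < MIN_VOLTAGE );
}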
Sprat, the real point I was making is that most of a "competent" embedded engineer's TIME is spent dealing with "bugs" at the assembly level. The bugs that are cured at the "C" level are the easy "Doh! what a stupid mistake" type, while the gotchas are not as easily found and do require assembly-level tracing.
Hopefully the underlying assembly code is not so mangled as to make it hard to trace (such as register optimization where some needed data store is held in R3, because the optimizer knows it can stay there until it is needed and doesn't have to write it out to the data space just to keep the data element current. You then find the traced code comparing against a 'mysterious' R3 that was loaded a long time ago, instead of comparing against 'Critical_Threshold', which is what you expected to see).
"find patterns where no patterns exists" == "code hallucinations"
--Cpt. Vince Foster 2nd Cannon Place Fort Marcy Park, VA
If you find yourself reaching for the ICE or stepping through compiler output on a regular basis you are either working with 3rd party junk rather than decent development tools or libraries, or the code you have written is junk.
I think I have to agree with that. Most code either works on the first run, or reading the code is enough to see what ails it. A bit of guard code can help in case I have made an incorrect assumption about the value range of the input, or in case I'm inserting the new code into already broken code.
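A minimal sketch of such guard code (the names are invented): check the range assumption up front instead of discovering it in the debugger later.

#include <assert.h>

extern const unsigned char table[64];

unsigned char table_lookup( unsigned char index )
{
    assert( index < 64 );     /* fail loudly if the range assumption breaks */
    return table[index];
}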
You can find ten of those errors in ten minutes but the one that takes two hours is the type I'm talking about. Hence, most of your 'time' is spent debugging something that is not glaringly obvious that the faulty "C" implementation of faulty logic is in error. Usually the problems that take the most time are those that are a result of 'undocumented features' in the data-sheets, somebody else's code, or self generated. And for those long duration bugs, you'll need to delve into the assembly.
The issue here is that debugging is not pantyhose (one size does NOT fit all). There are individual approaches to the sequence (look at the source, check it in the ICE, make a - most likely hopeless - try with a simulator, insert some printfs, ...), but to find all bugs in a timely manner the most dangerous attitude is "THIS is THE way".
Yes, I do occasionally resort to looking at the assembler; that is indeed a tool in my toolchest and it has, at times, helped immensely. Does that make it "the right debugging method"? Of course not, but neither does it make it "the wrong debugging method".
I think I have to agree with that. Most code either works on the first run, or reading the code is enough to see what ails it
Try, for instance, to write an interface to an FTDI Vinculum and debug it by the above method; you will die before the program runs.
There are beautiful debugging theories based on the assumption that all information given is complete and correct. Just one comment: male cow manure.
You are always so poetic.
...unable to get anything to work without constant debugging which they are incapable of doing at source level.
I agree. Among other sins, this approach yields code that is less maintainable and probably less portable.
Among other sins, this approach yields code that is less maintainable and probably less portable.
Your debugging method has NOTHING to do with code being "less maintainable". What makes code, among other things, "less maintainable" is 'fixing' bugs instead of removing them.
Erik, were you not, at least once in your (presumably) long career, tempted to win a few nanos here and there by using a platform-specific, nasty trick? That's what this is all about. Jack belongs to the school of standards and "working by the book"; you are the guy who wants to get the work done without losing a single nanosecond. I do admire your approach, but what I have seen so far has persuaded me to join the camp of people who like to hide behind the standard. Maybe it is because I live in the universe of millisecond-critical applications, not less (for now).
Were you not, at least once in your (presumably) long career, tempted to win a few nanos here and there by using a platform-specific, nasty trick?
The only "platform specific tricks" I have implemented are 'tricks' specific to the particular derivative I am working with. If a given derivative has, for example, multiple data pointers and the compiler does not use them, I will, in a time-critical application, go straight to assembler. And there I use very platform- (derivative-) specific tricks.
If by platform specific you refer to the (re the '51) stupid 'portability' (who has ever heard of a "small embedded" project being 'ported'?), I confess that if the Keil compiler allows me to specify DATA, I do it.
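For illustration (a sketch with invented names), Keil's memory-space keywords let you pin a variable into a specific space rather than leaving it to the memory model:

unsigned char data  fast_counter;        /* DATA: directly addressable internal RAM */
unsigned char xdata big_buffer[256];     /* XDATA: external/expanded RAM            */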