There is a new 8051 C compiler in beta test that beats Keil's compiler by 35% in code size and 14% in speed on the Dhrystone benchmark. And there is no need to select a memory model or use special keywords to control data placement.
More details here: www.htsoft.com/.../silabs8051beta
No new product should require optimum use of the hardware - unless it is a simple product built in large volumes, and it is known that the processor family has variants with larger flash and/or higher CPU frequency so that it is possible to just order a batch of bigger/faster processors to release a premium edition.
But since the lead times for new chips can be quite long, it is dangerous to ship a fully slimmed solution even if a bigger/better chip is available. If a little "gotcha" is found really late in the release cycle, the lead time to get the better chips may delay the release by several months, probably costing far more than the price difference between the two chip models (ignoring the goodwill loss from unhappy customers/resellers).
Another thing to think about is keeping some extra speed/code space for instrumenting the code with defensive bug catchers or field-debug abilities. It is really, really nice to have a bit of extra code that constantly checks important variables for invalid states, out-of-bounds parameters etc. and potentially emits a printout, calls a catch-all breakpoint function, ... and then either resets the unit or repairs the parameter to a valid value.
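To make the idea concrete, here is a minimal sketch of such a "bug catcher" in C. All the names (`repair_speed`, `SPEED_MAX`, the commented-out `report_fault` hook) are illustrative, not from any particular product: the point is just that the check-and-repair runs on every pass through the main loop and costs very little when nothing is wrong.

```c
#include <assert.h>

#define SPEED_MAX 100

/* Validates a state variable and repairs it to a legal value.
   A real build might also log the event over a spare link or
   call a catch-all breakpoint function before repairing. */
static int repair_speed(int speed)
{
    if (speed < 0) {
        /* report_fault("speed underflow"); -- hypothetical field-log hook */
        return 0;
    }
    if (speed > SPEED_MAX) {
        /* report_fault("speed overflow"); */
        return SPEED_MAX;
    }
    return speed; /* already valid, zero repair cost */
}
```

Whether the repaired value or a reset is the right response depends entirely on the product; for life-sustaining equipment a controlled reset may be safer than continuing with a guessed value.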
I must admit that I'm not too fond of ICE debugging. I prefer to read code, and possibly add an extra printout to verify an assumption. But it really, really helps if the final product can keep some of the asserts etc. If nothing else, it means that I can get the same timing when debugging and when making a release.
ICE debugging also often requires extra connectors that are not mounted on the mass-produced units, and when visiting a customer in the field, it can sometimes be nice to have a peek inside an installed unit - possibly one connected to misbehaving external equipment that isn't available at my office. Black-box testing will only verify the test cases I have managed to figure out - but the real world has a tendency to create a number of semi-impossible situations too.
Having the ability to use spare interface bandwidth to dump memory contents (using CAN, UART, SPI, ...) from a live installation can be a real life-saver, and in some situations it might not even be really safe to connect an ICE. I don't know about too many opto-isolated ICE tools.
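Such a memory dump needs almost nothing from the target. A sketch of the idea, assuming nothing about the actual link (the function just renders a RAM region as ASCII hex into a buffer, which the product's existing UART/CAN/SPI driver can then push out at whatever rate the link allows):

```c
#include <stddef.h>
#include <stdint.h>

/* Renders len bytes starting at src as ASCII hex into dst
   (two characters per byte, NUL-terminated). Returns the
   number of hex characters written. dst must hold 2*len+1. */
static size_t dump_hex(const uint8_t *src, size_t len, char *dst)
{
    static const char hex[] = "0123456789ABCDEF";
    size_t i, n = 0;
    for (i = 0; i < len; i++) {
        dst[n++] = hex[src[i] >> 4];   /* high nibble */
        dst[n++] = hex[src[i] & 0x0F]; /* low nibble  */
    }
    dst[n] = '\0';
    return n;
}
```

Because the output is plain text over a link the unit already has, it works through opto-isolation, over long cables, and on units with no debug connector mounted.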
An ICE is a valuable tool, but it is one of many, and I prefer - not always a choice I'm in control of :) - to catch my bugs before I need to pick up my ICE. Some people may prefer to pick up the ICE earlier, some later, but it may also be a question of what type of software is being developed. A traditional problem for debuggers is stopping at a breakpoint in a multi-threaded application. What happens with the other threads if they are expected to serve hardware in real-time, and a producer or consumer suddenly doesn't produce/consume data at the speed required by the hardware? An ICE can't stop the PC from sending serial data into the target, and the hardware FIFO can only swallow so many characters before overflowing... With a bit of overcapacity in code size and speed, I may get my information without the breakpoint and without breaking the real-time requirements.
I totally agree!
And, you rightly pointed out the High-Volume 'problem' in optimal hardware design. Making the trade between a $9 CPU and a $12 CPU multiplied by 300,000 units becomes a $900,000 decision. But the 10,000 added service calls you will get also cost money. That is what the managers get paid to decide (BTW I've been there).
Having a debug [kernel] in the code (provided it is allowed) can greatly reduce the field problems, and as you've stated, it is amazing how many "semi-impossible situations" exist. The most cost effective weapon against this is "defensive programming." Defensive programming adds time and code, so the new design must account for that.
The debug capability when visiting a customer in the field can be vital for both the customer and your company. Most embedded products are in the realm of low- to medium-quantity production. The ability to quickly return one of your "quality based products" with a field error to an operational state can offset the black eye you got in the first place when it failed. Without the tools to quickly diagnose and repair the problem, that black eye can cost you not only money but your "quality" reputation.
Mr. Westermark is spot-on when it comes to being capable of quick (and easy) fixes to a system by designing in "overcapacity."
An ICE is preferable over a simulator, but it is only NEEDED when you have a serious hardware/software error. The High Volume (>500,000 units) company I worked for did not have an ICE. At first I thought that was funny, but with competent engineers, it is rarely needed... actually we did have an ICE that was super old and lame, so it collected dust. It was used once in the three years I was there, and it was re-confirmed as "useless". In some cases, you simply cannot find the problem without an ICE. ...Bond-outs!
The over-reliance upon an ICE may inhibit the developer's clarity of thought. Kind of like the Flash versus UV-EPROM code memories did/do/does. When you had only five or ten EPROMs erased, and each erase cycle took four-plus hours, you didn't willy-nilly try this-n-that to see if this modification now works. You had to THINK about it first.
With that in mind, the developer who over-relies upon an ICE might not realize the importance of having the debug capabilities that Per Westermark is advocating. By thinking ICE-less for a while, you'll learn what kinds of field testing demands there are out there, and how to code up some really worthwhile debugging techniques/services. Like Per said, having an ICE is not always a choice. Using your brain is... keep it sharpened.
--Cpt. Vince Foster 2nd Cannon Place Fort Marcy Park, VA
An awful lot of blurb produced by simple mickey-take!
I don't feel offended that Erik would be preferred at the professional level. He obviously knows how to use an ICE and (apparently) I do not??!!
In general, the comments of Per and Vince are ones I would agree with.
Anyway, WTF - I am a Keil user, have been for many years and still believe that it is the best tool for the '51 developments that I have been involved in.
I don't think anyone is contradicting you. Erik is also using the Keil compiler. Most probably because he wants to, and not because he is forced to.
I have recently done a bit of PIC programming, using the Microchip compiler since Keil doesn't support that architecture. The "nice" compiler doesn't even support the full grammar... and there is at least one construct where it failed to produce a binary - it is currently unknown whether the error is in the compiler or the linker.
We developers do like good tools - it's just that our definition of good (and what kind of tools we prefer) varies with previous experiences and with type of products we develop. A one-man project has different requirements than a 100-man project. A life-sustaining equipment has different requirements than a set-top box. The people developing the set-top box would never manage to finish their product if they had to guarantee the reliability according to the standards needed of a pace-maker.
Because of the different needs, debates about A _or_ B often turn into heated arguments that don't lead to one or the other side switching opinion, and most of the time without making any readers switch position either. Debates that focus on comparing two alternatives have a much better chance of making people listen and try the alternative, since the debate isn't about right or wrong but about how one tool can complement another, or (if the discussion isn't between combatants) leads to information on how users of tool A (who can't afford tool B) can find reasonable workarounds to still be productive.
Per,
I too have recently used the Microchip C compiler.
The Keil C51 compiler certainly has its quirks; but compared to the Microchip "nice" compiler and its quirks, my opinion is that the Keil is more predictable in its output and therefore preferable.
Now I have had the opportunity to migrate across to the Keil ARM compiler. One comment from me with regards to the compiler - Absolutely lovely!
"I am a Keil user, have been for many years and still believe that it is the best tool for the '51 developments that I have been involved in." So do I. But is there anything wrong with wanting 'the best' to be better? I have no problems whatsoever with the optimizations provided by Keil (they are optional and I do not use them); I just want another level where some are implemented and the "debuggability killers" are not.
Someone above talked about "debugging by printf". That will work in many cases, but in "my world" where everything hangs on nanoseconds, inserting a time consuming call can make the "debug aid" a bug.
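A common low-overhead alternative in that situation is to log raw values into a small RAM ring buffer instead of calling printf, and read the buffer out later (over a debug link, or with the kind of memory dump discussed above). A minimal sketch, with illustrative names (`trace_log`, `trace_buf`): the logging call is just a store and a masked index update, a handful of cycles instead of a time-consuming formatted print.

```c
#include <stdint.h>

#define TRACE_SIZE 16u  /* power of two so wrap-around is a cheap mask */

static volatile uint16_t trace_buf[TRACE_SIZE];
static volatile uint8_t  trace_head;

/* Records one value in the ring buffer, overwriting the oldest
   entry when full. Cheap enough to leave in timing-critical code. */
static void trace_log(uint16_t value)
{
    trace_buf[trace_head] = value;
    trace_head = (uint8_t)((trace_head + 1u) & (TRACE_SIZE - 1u));
}
```

The timing cost is nearly constant, so leaving the calls in the release build barely disturbs the nanosecond-level behavior being debugged.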
Erik