Let me tell you a story about a guy named Jed...
A long long time ago (pre-ANSI C), in a galaxy far far away, I worked for a company that had to develop internal "C" coding standards. "Jed" worked on one aspect of the standard while I worked on another. We would hold weekly meetings to reconcile our differences. In attendance, we had other professionals for simple sanity checking and to gain insights from different points of view.
Chris was one of our attendees and was a very experienced software veteran who had plenty of code in various satellite systems orbiting our planet today. By then, Chris was in upper management and graced us with his wisdom when he could.
Well, during one of our weekly meetings, "Jed" and I got into a simple disagreement over a rule about header files. We were at an impasse, so we waited for Chris to arrive to make the final decision; about five of us professional engineers were in the room.
When Chris arrived, he heard the arguments, and quickly announced that I was right. (Hence, Jed was wrong).
Well, Jed freaked out and wanted to take the guy outside and teach him a lesson! ... Jed was red-faced, quickly stood up, even took a step towards Chris, and said, "Chris, let's just step outside and settle this! I am right and you don't know what you're talking about!" etc. etc.
The other attendees and I were duly impressed by Jed's technique for handling technical disagreements. Especially with upper management.
Instead of trying to learn that he *might* be wrong, Jed leaped into the confrontation method of getting his way. Bullies do this because they lack the brain-power to reason through a disagreement. It is a childish trait.
Children are at a huge disadvantage when arguing with "an adult" (or somebody who is much smarter than they are), and they will become very frustrated over their strong desire to assert themselves and their inability to win the mental sparring. They will become physically and/or verbally abusive. Some people outgrow this, and some don't.
I think Jed showed his 'abilities' quite well. I find that this is true with so many people on so many subjects. I've seen this behavior many times over. I've seen it here on this forum.
When an "Original Poster", asks a question and people try to answer it (after much refinement of the OP's question) you get these side-bar posts where somebody will start attacking another poster's efforts. And I mean 'attack' and not augment or refine.
I don't have a problem with correcting or clarifying others, or even the occasional sprinkling of sarcasm, but when it ALWAYS devolves into some vindictive vitriol between a brisling poster and the rest of 'us,' I wonder if it is out of ignorance, malice, or some twisted form of self-entertainment. All three of which are adolescent behaviors. (en.wikipedia.org/.../Adolescence)
Since the regular players here are detail-oriented and thus savvy enough to know who I'm talking about, I don't think I have to name names.
He is critical enough to figure it out himself, so I would expect the offender to read this and, before posting, ask himself whether he is demonstrating Ignorance, Malice, or Entertainment, or being an adult and providing a constructive post.
And, I hope his "Mea Clupea" (en.wikipedia.org/.../Mea_culpa) will be a silent one, because I'm kind of tired of reading his Hostile Postings (HP).
</rant> --Cpt. Vince Foster 2nd Cannon Place Fort Marcy Park, VA
But at least I have a computer - you can't catch me at a late stage of my application process by noting that my triplicates are not enough. Life must have been a real *** when you had to fight with huge sets of carbon paper duplicates ;)
Somewhere in the process there might be a bean counter that I can swing over to my side by claiming 'saved money', 'unchanged code run on multiple hardware platforms (possibly the next one, which hasn't had the money granted yet)', 'faster time-to-market by testing substantial parts of the code on off-the-shelf hw'.
If I can't produce selling (and hopefully believable) arguments, then I can't have spent enough time pondering the need/advantages of an inline_<target>.h file, and should obviously have my application rejected. This world has already seen enough code produced "just because", with no real thought behind it.
When we know that well-designed software regularly fails, it's a wonder that we survive all the crappy code out there. Space avionics may seem like the ultimate challenge, but a single little bad line of code in an ABS system may quickly destroy an otherwise perfect day. Or my mobile phone, which managed to move a meating to an earlier date but kept the alarm reminder...
"meating"
Not criticizing you, just found it funny :)
Erik
"That I am much more concerned with "Keil '51 C" than the utterly stupid concept of insisting on "Real C" on a '51 is another thing."
That is a bit dangerous. Keil has made some deviations (because of the puny stack and no real 16-bit support in the chip) and some extensions (because of the need for different processor instructions to address different blocks of memory, and because of bit variables etc.).
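As a rough sketch of what those extensions look like, assuming the usual Keil C51 syntax (the declarations here are illustrative, not a complete list):

    #include <reg51.h>                     /* SFR declarations for a generic 8051 */

    bit  busy_flag;                        /* one bit in bit-addressable RAM      */
    sbit LED = P1^0;                       /* named access to a single port pin   */

    unsigned char data  fast_cnt;          /* directly addressable internal RAM   */
    unsigned char idata mid_buf[16];       /* indirectly addressable internal RAM */
    unsigned char xdata big_buf[64];       /* external RAM, reached via MOVX      */
    unsigned char code  lut[] = {1, 2, 3}; /* constant table placed in ROM        */

None of this is standard C; it exists because the chip's memory architecture demands it.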
But besides that, Keil is a C compiler, which means that Keil to the best of their knowledge tries to follow the C standard. If you have grown used to how it works, and a bug is found where the compiler unintentionally deviates from the standard, then it is likely that a new version of the compiler will correct this bug.
If you assumed that the previous version of the compiler was "the reference" and continued to write software based on how you learned that the Keil compiler worked, then your new and old code can suddenly fail.
If you knew the language standard by heart, you would not have been caught off-guard, because you would be able to notice if the compiler did deviate from the standard in a way that Keil hasn't explicitly documented. Before coding for this behaviour, you would be able to contact Keil and ask them why the compiler does something unexpected.
Right now, you think that your C51 is wonderful. But if there is suddenly a need to create a product with a different processor architecture, your knowledge about the specific Keil C51 deviations would be worth zero. If you are not aware of exactly how the standard requires signed and unsigned variables to be treated, you may create the nastiest of bugs if/when you have to develop for a general-purpose 16-bit or 32-bit chip.
When mixing signed and unsigned variables and working with sub-int variable types, it is absolutely vital to know the difference between what the compiler does and what the standard requires.
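A small example of the kind of trap meant here, in plain standard C (hypothetical values, but the behaviour is what the standard mandates):

    #include <stdio.h>

    int main(void)
    {
        int           a = -1;
        unsigned int  b = 1;
        unsigned char c = 1;

        /* int vs unsigned int: 'a' is converted to unsigned and becomes
           a huge positive value, so the comparison is false. */
        printf("%s\n", (a < b) ? "true" : "false");   /* prints "false" */

        /* sub-int type: 'c' is promoted to (signed) int first, so the
           same-looking comparison is true. */
        printf("%s\n", (a < c) ? "true" : "false");   /* prints "true" */

        return 0;
    }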
No, luckily for me I noticed the missing alarm, but even if I had missed the meeting, I think I would have survived with just a brief comment from the CEO about the value of meeting discipline. Threatening me with "meating" would probably not help me meat my deadlines ;)
But if there is suddenly a need to create a product with a different processor architecture, your knowledge about the specific Keil C51 deviations would be worth zero.
Different tools, different rules. The point is that if Keil had a 'because' instead of a 'for' (a deliberately chosen ridiculous difference), then when you work with Keil, you use 'because', whatever the standard states. If you use a different compiler, you work with the rules for that compiler. All the wunnerful stuff about "the C standard" is fine and dandy; the point is that what you should work with is how your tool behaves, not with some 'standard' that might and might not apply.
Am I saying that knowing the standard is worthless? By no means. Just that whining about non-standard 'features' or 'differences' implemented to match the processor you happen to use is ridiculous. Also, ignoring an extremely useful feature (e.g. the Keil '51 BIT) for the sake of "the standard" is going about designing the project the wrong way.
PS: I have worked (successfully) with at least 5 different compilers and at least 4 different processor architectures, and in all cases applied the principle "it is better to work with your tool than some 'standard' that might and might not apply".
Life must have been a real *** when you had to fight with huge sets of carbon paper duplicates ;)
The real problem was that this "Jed" guy wrote such horrendous code that I refused to work with him until we had standards. Management agreed, and in an effort to corral him until the project was done, we implemented the draconian policies. As soon as the project was done, "Jed" no longer reported to work... for some reason. (He made it past his 'outburst' by a few months.)
This world has already seen enough code produced "just because", with no real thought behind it.
Human Safety Factors are always paramount, and once you have gone down that road (whether on military, aerospace, medical, automotive, or industrial equipment projects), those habits might help you when you code up that "My Little Notes" iPhone application. Hopefully that code won't lose the medical records that customers like to carry around with them on their trip to Mozambique... and hence, that non-Human-Safety-Factored project becomes a life-saving app. Or not.
And yes, we are now way off topic. We must think of Dave Sudolcan and his Keil Thread Worshipers. They might become irritated and turn into a whole school of smoked sardines. And we all know how those sprats like to communicate:
www.newscientist.com/article.ns
--Cpt. Vince Foster 2nd Cannon Place Fort Marcy Park, VA
Erik,
Don't you think that if tool vendors had had this slogan ticking in their heads...
the point is that what you should work with is how your tool behaves, not with some 'standard' that might and might not apply.
...our industry would never have reached ANY coding standards whatsoever?
our industry would never have reached ANY coding standards whatsoever?
Of course any tool should adhere to existing standards as far as possible/practical, but here are a couple of examples of what I mean. For the benefit of "PC programmers", Keil '51 can do malloc(). ARGH, have you seen how it works in the little resource-starved '51? To make use of the unique facilities of the '51, Keil has introduced the BIT variable type.
Fully adhering to the standard, you would use malloc() and never use BIT, which would be plain unadulterated stupidity.
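To make that contrast concrete, a minimal sketch assuming Keil C51 and its init_mempool()/malloc() library routines (details from memory, so check the manual):

    #include <stdlib.h>

    static unsigned char xdata heap[64];  /* the entire "heap", carved out
                                             of already scarce xdata RAM  */
    bit rx_ready;                         /* non-standard, but maps onto a
                                             single SETB/CLR instruction  */

    void demo(void)
    {
        unsigned char xdata *p;

        init_mempool(heap, sizeof heap);  /* Keil-specific heap setup     */
        p = malloc(16);                   /* plus per-block bookkeeping,
                                             all for a 16-byte buffer     */
        rx_ready = (p != 0);              /* one-bit flag, no masking     */
    }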
For the benefit of "PC programmers", Keil '51 can do malloc(). ARGH, have you seen how it works in the little resource-starved '51?
Not for the C51, but for the C166. It looked like World War Three!
Erik: Once, many years ago, I worked with C. Or C++. K&R designed something called C. Bjarne Stroustrup played with something called C++. There was even the old cfront program that converted C++ to C, since many architectures had good (or at least existing) C compilers.
My language reference was to a large part the Borland Turbo C/Turbo C++ manuals, or whatever information I could bring up about the early GNU C compiler (run on a Sun).
I didn't mind that Turbo C and gcc differed a bit. I could live with that, and whenever code for some strange reason failed to compile, or did produce unexpected results, I could figure out why or how to get around it.
Writing code for segmented 16-bit processors or 32-bit linear address spaces wasn't so very different, once I realized that a 'bus error' meant 'me bad' (for people who don't like Latin terms :p ).
ANSI C felt like some dusty work by some abstract people I didn't much care about. As long as I could learn how to get code through both Turbo C and gcc and get them to print similar results, I was happy.
I had mostly left C and was busy with C++ (or working with old C compilers) when ISO/IEC 9899 was adopted. But by that time, a lot had happened with C++. We got exceptions, RTTI, etc. Initially, I felt that the changed scope rule for "for" statements was the most compatibility-breaking change. And all compilers were in a big flux, either trying to catch up or already having implemented most of the features based on preliminary suggestions (possibly incompatible with the finally accepted definitions). It really became important to look into the ISO C++ standard, since it's the official map for all the compiler vendors.
After starting to read the standards, it suddenly became clear that the standards are not strange beasts living their own lives. The language used is quite clear. And they have a level of detail that the compiler vendor manuals are not even close to reaching, and couldn't/shouldn't try to match.
Just like the datasheets for hw components, they represent the ultimate description of the tools we are using. You not only get information about the "what"; from the text you can deduce _why_ the standard requirements look the way they do.
When you see a hw question here, you may think that the OP is a fool for not being able to pick up the answer from the datasheet within seconds to minutes. But with experience, you have learned to master datasheets and know what to look for, and where to look. But have you spent the same time getting comfortable with the _real_ datasheet for your compiler?
The standards do know about architectural differences. They do know that C and C++ compilers may need extra data types or extra attributes (such as xdata) added to variable declarations.
So how do you know if your compiler vendor is well-versed in the standard? Hint: they probably aren't if your code suddenly breaks, or if you get release notes claiming that such a compiler-specific attribute suddenly binds in a different way than in a previous version of the compiler.
Turbo C had far, near and huge pointers. That was specific to the x86 architecture, but not much has changed. If the compiler vendor knows the standard, they can add an xdata attribute and you will know whether you should write the word first on the line, before the star, after the star, or after the variable name. The far, near and huge attributes have survived the x86 era. Some compilers may have far, some may have _far, and some may have __far. But the declaration still looks the same.
To you, xdata is a violation of the C standard, and a reason to ignore the standard. To me, the standard has already told Keil how to implement the xdata extension in a way that lets me understand whether it binds to the left or to the right.
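A sketch of what that binding means in practice, assuming Keil C51 syntax; the placement mirrors how const binds in standard C:

    char xdata  buf[16];     /* the chars themselves live in xdata         */
    char xdata *p1;          /* a pointer (in the default space) to char in
                                xdata: the attribute binds to the left     */
    char xdata * xdata p2;   /* a pointer that is itself stored in xdata,
                                pointing to char in xdata                  */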
(to be continued)
(continued)
The standard is not about forcing compiler vendors into producing carbon-copy products, all being "exactly" identical and totally limited by hard rules. It is about making sure that all common parts of the different compilers behave exactly as you expect them to. And the standard makes sure that extensions are added in a way that makes the extensions logical supersets of the language.
Do get the C standard. It isn't expensive. Whenever you see a thread discussing syntax problems, pick up the standard and try to find the relevant sections. If you do, you will notice that the answers are clearly written and easy to find.
And any such thread could actually be summed up as "Please read the manual", just like questions about "how do I initialize my watchdog?". Whether "Please read the manual" is a good answer is a separate issue, but the reason people ask questions is that they haven't read the correct documentation. Whether that is because they don't know what to read, don't understand the language, or are lazy is another issue. But for anyone to be able to answer (knowing the answer and not just assuming they know the answer), some people really must have spent the time reading the ultimate datasheet.
A huge number of the questions on this forum come about because people haven't spent time with the documentation. But whenever people get links to the Keil documentation, it is important to note that the Keil documentation isn't complete. It is just an addendum to read as a follow-up to the ISO C standard.
When I initially coded C, everything was obvious. A + B could only mean one thing. But that isn't true. If you look at the standard, it spends a lot of paragraphs explaining what the compiler must do to make sure that you don't get surprised when you try A + B. The standard worries about A being signed and B being unsigned, or A and B having different sizes. And it worries about the case where you assign the answer to a variable of a different size. There are so many small details needed to make C look obvious and generate "obvious" results.
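For example, a few lines of plain standard C showing what the integer promotions quietly do to A + B and to the assignment of the result (hypothetical values):

    #include <stdio.h>

    int main(void)
    {
        unsigned char a = 200, b = 100;
        unsigned char small;
        int           wide;

        wide  = a + b;   /* both operands are promoted to int before the
                            add, so the sum does not wrap: wide == 300   */
        small = a + b;   /* the same sum truncated on assignment to the
                            narrower type: 300 % 256 == 44               */

        printf("%d %u\n", wide, (unsigned)small);   /* prints "300 44" */
        return 0;
    }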
Whenever the compiler vendor misreads the standard, you as the end user will get a big nose-bleed. If you know the standard, you know where the compiler vendor has erred. If you don't know the standard, you may assume that you did something wrong.
All the changes Keil has made to their C51 compiler are so tiny compared to the language standard that they are more or less a non-issue. If you throw away the actual declaration and just look at the code using the sbit data type, it will look like standard C. It will almost fully behave as standard C. It will almost be portable. That the data type is sbit doesn't mean much for portability, since the real portability issue isn't the sbit type but how another processor controls a port pin.
Following the standard isn't a question of allowing sbit or not. The most important part, when it comes to the standard, is whether sbit is signed or not, and what happens if you try to assign -1, 0, 1 or 2 to it, i.e. whether the behaviour follows the behaviour of the rest of the language, just as it is important to know what result you get (or whether it will be undefined) if you assign a value from an int or unsigned int to a signed or unsigned char.
It isn't really the C standard that controls how portable a program is. It is more a question of how you access the actual hardware, or how you handle your variables. A program can be highly portable while being written for a specific architecture, if 95% of the source is generic and located in one set of files and the last 5% is target-specific and located in different files. And the code can be very efficient. Or it can be portable by using #define blocks (but I don't much like #define blocks, because of readability issues).
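As a hypothetical sketch of that 95/5 split (the file and function names are invented for illustration):

    /* port_led.h - the 5%: a tiny target-specific interface */
    void led_set(unsigned char on);

    /* port_led_c51.c - one implementation per target; the non-standard
       sbit stays quarantined in this file (Keil C51 version) */
    #include <reg51.h>
    #include "port_led.h"
    sbit LED = P1^0;
    void led_set(unsigned char on) { LED = on ? 1 : 0; }

    /* blink.c - the 95%: pure standard C, compiles for any target */
    #include "port_led.h"
    void blink_step(unsigned long tick)
    {
        led_set((unsigned char)(tick & 1));   /* toggles once per tick */
    }

Porting then means rewriting only the port_led_c51.c-style files.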
On the other hand, you can write code where every single line of code is intentionally written for the target architecture. But the code can still be very inefficient.
Portable and efficient are not mutually exclusive. In some cases you can have both. In some cases you can't.
For me, portability is to a large part about how easily you can move the code to a different target, and how easily you can make sure that the ported code will produce the same results.
Your design controls how easily you can adapt the code for a different target. Your understanding of the C/C++ standards and of "standard" portability issues, such as the size of an int or byte order, affects how hard it will be to get the expected results out of the ported code.
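For instance, byte order stops being a porting problem if values are serialized byte by byte instead of copying raw objects; a small sketch in plain C (stdint.h assumed, so substitute typedefs on pre-C99 tools):

    #include <stdint.h>

    /* Write a 16-bit value high byte first; the result is identical on
       big- and little-endian targets and independent of sizeof(int). */
    void put_u16(uint8_t *dst, uint16_t v)
    {
        dst[0] = (uint8_t)(v >> 8);
        dst[1] = (uint8_t)(v & 0xFFu);
    }

    uint16_t get_u16(const uint8_t *src)
    {
        return (uint16_t)(((uint16_t)src[0] << 8) | src[1]);
    }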
Your projects may be so special that you have made the decision that you don't have to spend any time on making the design portable, and instead spend all design time on getting the hw to dance for you. But you should still spend time on the language side of portability, since that very much defines how portable _you_ are. Can _you_ be moved to a different architecture and, after reading the datasheets for the new processor, start to produce working code? And will the code work because _you know_ that it will work, or because you ran it and saw the result you hoped to see?
Let's take two programs that perform the same job exactly as per the project requirements. The one that uses malloc() is more readable, easier and quicker to write, less prone to suffering subtle bugs and easier to maintain.
Which is better?
The standard doesn't require you to use malloc(). It doesn't even require malloc() to be implemented. Oh, and it doesn't prevent you from using 'bit'.
Per, Excellent post. Well said, well thought-out, and well executed.
Your projects may be so special that you have made the decision that you don't have to spend any time on making the design portable, and instead spend all design time on getting the hw to dance for you. But you should still spend time on the language side of portability, since that very much defines how portable _you_ are. Can _you_ be moved to a different architecture and, after reading the datasheets for the new processor, start to produce working code?
Per, anything will be more or less portable. I have reused code across platforms and will state, as I have before, "to port non-portable code is less effort than making the original code portable", not least because at the time you write the original code, you usually do not know what it might be ported to some day. As an example: if you write some code in C51, should you make the effort of making it portable to SDCC?
And will the code work because _you know_ that it will work, or because you ran it and saw the result you hoped to see?
As far as I am concerned, if it works for any reason except "the code works because _I know_ that it will work", it does not work, whatever the result of a 'test' might show. The internet is flooded with 'working' code that only works under the exact same circumstances as those of the original 'developer'.
PS Vocabulary:
portable: code that without any change will compile and work when compiled by C51, SDCC, and GNU
non-portable: code that requires nominal changes to work when compiled by C51, SDCC, or GNU
On this whole portability thing, I'm glad Erik provided the glossary of terms, because when Erik points out that portable code is rare, he is right by his definition of terms. I don't think ANY embedded professional would expect to write in pure "C". It would be simply smashing if they could.
All processor variants and their cohorts in the IDE business will have special options for squeezing every last drop of performance out of the final executable code.
Be it speed or space, the need to optimize your code is driven by us, the customers. Imagine a pure "C" compiler (no deviations from the standard at all) for the 8051. To eke core performance out of it, we would complain about the gyrations we would need to go through to achieve 'assembly level' performance, and say that if they (Keil) would only provide a "C" variant extension called 'bit' or 'xdata' or 'data' like their competitor does, we'd be happy. We would be able to eke that performance out and be happy.
So they do. And so does their competition, etc. But as Per points out, at least these deviations are only a stone's throw away from the main road that The Standard has carved out. And of course, going from one processor to the next, you are still only a stone's throw away at the "C" level. Without knowledge of The Standard, how can any embedded professional determine whether that stone was thrown, or shot out of a Howitzer?
I prefer to know my tools well, but at the same time, I don't want to NEED to know the tool in order to complete a task or project. If that Howitzer deviation must be taken, then I get a bit aggravated, especially when I want my code to last years and not be boxed into a corner by a highly specific compiler path that cannot be logically reworked for a new or different compiler, or even a new or different processor.
I consider it portable if there is 'minimal' impact on the code and 'minimal' impact on the coder to go from one platform to another. Centralizing the non-portable or high-risk code helps, and trying to write the code closer to the 'pure' C environment aids in making code portable.
So when we speak of 'portable' or 'nominal' or 'minimal' they are all subjective concepts, and we can go on-and-on-and-on refining what 'portable' means (and the derivative discussion on how to write it), but I doubt if there is going to be a clear-cut answer. To presume to hold The Answer is clearly spratter-brained.
So when we speak of 'portable' or 'nominal' or 'minimal' they are all subjective concepts, and we can go on-and-on-and-on refining what 'portable' means (and the derivative discussion on how to write it), but I doubt if there is going to be a clear-cut answer. To presume to hold The Answer is clearly spratter-brained.
"ALL code is portable, the only difference is the amount of work involved in porting it"
So, Vince, we agree.