Let me tell you a story about a guy named Jed...
A long long time ago (pre-ANSI C), in a galaxy far far away, I worked for a company that had to develop internal "C" coding standards. "Jed" worked on one aspect of the standard while I worked on another. We held weekly meetings to reconcile our differences; in attendance we had other professionals for simple sanity checking and to gain insights from different points of view.
Chris was one of our attendees and was a very experienced software veteran who had plenty of code in various satellite systems orbiting our planet today. By then, Chris was in upper management and graced us with his wisdom when he could.
Well, during one of our weekly meetings, "Jed" and I got into a simple disagreement over a rule about header files. We were at an impasse, so we waited for Chris to arrive and let him make the final decision; about five of us professional engineers were in the room.
When Chris arrived, he heard the arguments, and quickly announced that I was right. (Hence, Jed was wrong).
Well, Jed freaked out and wanted to take the guy outside and teach him a lesson! ... Jed was red-faced, quickly stood up, even took a step towards Chris, and said "Chris, let's just step outside and settle this! I am right and you don't know what you're talking about!" etc etc.
The other attendees and I were duly impressed over Jed's technique of handling technical disagreements. Especially with upper management.
Instead of entertaining the idea that he *might* be wrong, Jed leaped straight to the confrontational method of getting his way. Bullies do this because they lack the brain-power to reason through a disagreement. It is a childish trait.
Children are at a huge disadvantage when arguing with "an adult" (or somebody who is much smarter than they are), and they become very frustrated over their strong desire to assert themselves and their inability to win the mental sparring. They get physically and/or verbally abusive. Some people outgrow this, and some don't.
I think Jed showed his 'abilities' quite well. I find that this is true with so many people on so many subjects. I've seen this behavior many times over. I've seen it here on this forum.
When an "Original Poster" asks a question and people try to answer it (after much refinement of the OP's question), you get these side-bar posts where somebody starts attacking another poster's efforts. And I mean 'attack', not augment or refine.
I don't have a problem with correcting or clarifying others, or even the occasional sprinkling of sarcasm, but when it ALWAYS devolves into some vindictive vitriol between a bristling poster and the rest of 'us,' I wonder whether it is out of ignorance, malice, or some twisted form of self-entertainment. All three are adolescent behaviors. (en.wikipedia.org/.../Adolescence)
Since the regular players here are detail-oriented and thus savvy enough to know who I'm talking about, I don't think I have to name names.
He is critical enough to figure it out himself, so I would expect the offender to read this and, before posting, ask himself whether he is demonstrating Ignorance, Malice, Entertainment, or being an adult and providing a constructive post.
And, I hope his "Mea Clupea" (en.wikipedia.org/.../Mea_culpa) will be a silent one, because I'm kind of tired of reading his Hostile Postings (HP).
</rant> --Cpt. Vince Foster 2nd Cannon Place Fort Marcy Park, VA
you conveniently ignore that while "keil C" has a variable named BIT, you will not find that in the "ISO (formerly ANSI) Standard 'C'"
How do you reconcile the fact that C51 is an ANSI/ISO 'C' compiler with the existence of 'bit', I wonder?
well, I asked; why do you not answer instead of 'wondering'?
You didn't ask anything, you made a statement to which I responded with a question. In any case, given that you treat any suggestion that you have not read the standard as a "baseless accusation" then I'm sure you'll feel insulted if I suggest that you might not know the answer to my question.
Do you still claim to have read the ISO 'C' standard?
how can this be portable without preprocessor directives #if COMPILER == C51 ....
It isn't portable. Wrapping code in preprocessor directives doesn't make it portable - in fact, it makes it clear that it is non-portable.
NOTE: it is, of course, possible to just use the OR and ignore the efficiency, but what if the C51 project is time critical
I'd be unlikely to have designed my way into a situation where a bit operation rather than a byte operation would make the difference between project success and project failure. Perhaps you write the code, compile and count the clock cycles before you select the processor and oscillator?
and the ACME project is not because it runs on a much faster processor
Ah, glad to see you agree with me. Use a faster processor.
If the above is not "to your liking" come out of your hole and state what I suspect is your position that you do not give a hoot about efficiency.
I care about efficiency where efficiency is the most important factor. With sensible design, however, it rarely is.
In short "Wrapping code in preprocessor directives doesn't make it portable - in fact, it makes it clear that it is non-portable" is contradicted by your own statements.
Really? Which ones?
Tamir,
A bit in C51 is there to specifically use the bit storage area of the CPU. There is a block of 128 bits that are a little like prime real estate. They are particularly useful for boolean operations.
C51 has access to this area with a specific extension.
When considering porting, they are not such a big issue - and I see the usual stubbornness being exhibited by a certain poster.
Generally, I would not consider using the bit directly within the code, but would instead use a typedef such as:
#if C51
typedef bit Flag;
#else
typedef unsigned char Flag;
#endif
This is put into a header file with all other port related details.
So what's so difficult about it? Nothing!
Jack, Erik,
I don't know how Keil handles bit fields for the C51, but for an ARM it is certainly a bad idea to use them, for the following reasons:
* Jack must agree with me that bit fields are not really a solid part of the C standard; compilers seem to have artistic freedom when dealing with them, which can yield more or less efficient code (packing of structures...).
* Because all ARM registers are 32 bit, two instructions are required to test a bit: a shift to the right, then a separate instruction to test the value.
it is much faster to use a 32 bit integer as a container for your bit fields.
I wonder: what is more efficient for a C51? using a bit field or an 8 bit integer?
Your projects may be so special that you have decided you don't have to spend any time on making the design portable, and instead spend your design time on getting the hw to dance for you. But you should still spend time on the language side of portability, since that very much defines how portable _you_ are. Can _you_ be moved to a different architecture, and after reading the datasheets for the new processor start to produce working code?
Per, anything will be more or less portable. I have reused code across platforms and will state, as I have before, that "to port non-portable code is less effort than making the original code portable", not least because at the time you write the original code, you usually do not know what it might be ported to some day. As an example, if you write some code in C51, should you make the effort of making it portable to SDCC?
And will the code work because _you know_ that it will work, or because you ran it and saw the result you hoped to see? As far as I am concerned, if it works for any reason except "the code works because _I know_ that it will work", it does not work, whatever the result of a 'test' might show. The internet is flooded with 'working' code that only works under the exact same circumstances as those of the original 'developer'.
Erik
PS vocabulary:
portable: code that without any change will compile and work when compiled by C51, SDCC and GNU
non-portable: code that requires nominal changes to work when compiled by C51, SDCC or GNU
"How do you reconcile the fact that C51 is an ANSI/ISO 'C' compiler with the existence of 'bit', I wonder?" well, I asked; why do you not answer instead of 'wondering'?
'Wrapping code in preprocessor directives doesn't make it portable - in fact, it makes it clear that it is non-portable.'
how can this be portable without preprocessor directives?
#if COMPILER == C51
bit47 = TRUE;
#elif COMPILER == ACME
bitword |= 0x04;
#endif
NOTE: it is, of course, possible to just use the OR and ignore the efficiency, but what if the C51 project is time critical (if you even know what that is) and the ACME project is not, because it runs on a much faster processor? If the above is not "to your liking", come out of your hole and state what I suspect is your position: that you do not give a hoot about efficiency.
Another query: if you were to be portable between Keil and SDCC, how would you manage the bit definition without preprocessor directives?
Per, Excellent post. Well said, well thought-out, and well executed.
--Cpt. Vince Foster 2nd Cannon Place Fort Marcy Park, VA
For the benefit of "PC programmers", Keil '51 can do malloc(). ARGH, have you seen how it works in the little resource-starved '51?
Let's take two programs that perform the same job exactly as per the project requirements. The one that uses malloc() is more readable, easier and quicker to write, less prone to suffering subtle bugs and easier to maintain.
Which is better?
Fully adhering to the standard, you would use malloc() and never use BIT, which would be plain unadulterated stupidity.
The standard doesn't require you to use malloc(). It doesn't even require malloc() to be implemented. Oh, and it doesn't prevent you from using 'bit'.
(continued)
The standard is not about forcing compiler vendors into producing carbon-copy products, all "exactly" identical and totally limited by hard rules. It is about making sure that all common parts of the different compilers behave exactly as you expect them to. And the standard makes sure that extensions are added in a way that makes the extensions logical supersets of the language.
Do get the C standard. It isn't expensive. Whenever you see a thread discussing syntax problems - pick up the standard and try to find the relevant sections. If you do, you will notice that the answers are clearly written and easy to find.
And any such thread could actually be summed up as "Please read the manual", just like questions about "how do I initialize my watchdog?". Whether "Please read the manual" is a good answer is a separate issue, but the reason people ask questions is that they haven't read the correct documentation. Whether that is because they don't know what to read, don't understand the language, or are lazy is another separate issue. But for anyone to be able to answer (knowing the answer and not just assuming they know it), some people really must have spent the time reading the ultimate datasheet.
A huge number of questions on this forum are asked because people haven't spent time with the documentation. But whenever people get links to the Keil documentation, it is important to note that the Keil documentation isn't complete. It is just an addendum to read as a follow-up to the ISO C standard.
When I initially coded C, everything seemed obvious. A + B could only mean one thing. But that isn't true. If you look at the standard, it spends a lot of paragraphs explaining what the compiler must do to make sure that you don't get surprised when you try A + B. The standard worries about A being signed and B being unsigned, or A and B having different sizes. And it worries about the case where you assign the result to a variable of a different size. There are so many small details needed to make C look obvious and generate "obvious" results.
Whenever the compiler vendor misreads the standard, you as the end user get a big nose-bleed. If you know the standard, you know where the compiler vendor has erred. If you don't know the standard, you may assume that you did something wrong.
All changes Keil has made to their C51 compiler are so very tiny compared to the language standard that they are more or less a non-issue. If you throw away the actual declaration and just look at the code using the sbit data type, it will look like standard C. It will almost fully behave as standard C. It will almost be portable. That the data type is sbit doesn't mean much for portability, since the real portability issue isn't the sbit type but how another processor controls a port pin.
Following the standard isn't a question of allowing sbit or not. The most important part, as far as the standard is concerned, is whether sbit is signed or not, and what happens if you try to assign -1, 0, 1 or 2 to it, i.e. whether its behaviour follows the behaviour of the rest of the language, just as it is important to know what result you get (or whether it is undefined) if you assign a value from an int or unsigned int into a signed or unsigned char.
It isn't really the C standard that controls how portable a program is. It is more a question about how you access the actual hardware, or how you handle your variables. A program can be highly portable, while being written for a specific architecture if 95% of the source is generic and located in one set of files, and the last 5% is target-specific and located in different files. And the code can be very efficient. Or it can be portable by using #define blocks (but I don't much like #define blocks because of readability issues).
On the other hand, you can write code where every single line of code is intentionally written for the target architecture. But the code can still be very inefficient.
Portable and efficient are not mutually exclusive. In some cases you can have both. In some cases you can't.
For me, portability is to a large part how easily you can move the code to a different target, and how easily you can make sure that the ported code produces the same results.
Your design controls how easily you can adapt the code for a different target. Your understanding of the C/C++ standards and "standard" portability issues, such as the size of an int or byte order, affects how hard it will be to get the expected results out of the ported code.
Your projects may be so special that you have decided you don't have to spend any time on making the design portable, and instead spend your design time on getting the hw to dance for you. But you should still spend time on the language side of portability, since that very much defines how portable _you_ are. Can _you_ be moved to a different architecture, and after reading the datasheets for the new processor start to produce working code? And will the code work because _you know_ that it will work, or because you ran it and saw the result you hoped to see?
Erik: Once, many years ago, I worked with C. Or C++. K&R designed something called C. Bjarne Stroustrup played with something called C++. There was even the old cfront program that converted C++ to C, since many architectures had good (or at least existing) C compilers.
My language reference was to a large part the Borland Turbo C/Turbo C++ manuals, or whatever information I could bring up about the early GNU C compiler (run on a Sun).
I didn't mind that Turbo C and gcc differed a bit. I could live with that, and whenever code for some strange reason failed to compile, or did produce unexpected results, I could figure out why or how to get around it.
Writing code for segmented 16-bit processors or a 32-bit linear address spaces wasn't so much different, after I realized that a 'bus error' meant 'me bad' (for people who don't like latin terms :p ).
ANSI C felt like some dusty work by some abstract people I didn't much care about. As long as I could learn how to get code through both Turbo C and gcc and get them to print similar results, I was happy.
I had mostly left C and was busy with C++ (or working with old C compilers) when ISO/IEC 9899 was adopted. But by that time, a lot had happened with C++. We got exceptions, RTTI etc. Initially, I felt that the changed scope rule for "for" statements was the most compatibility-breaking change. And all compilers were in a big flux, either trying to catch up, or already having implemented most of the features based on preliminary suggestions (possibly incompatible with the finally accepted definitions). It really became important to look into the ISO C++ standard, since it is the official map for all the compiler vendors.
After starting to read the standards, it suddenly became clear that the standards are not strange beasts living their own lives. The language used is quite clear. And they have a level of detail that the compiler vendor manuals are not even close to reaching, and couldn't/shouldn't try to match.
Just as with the datasheets for hw components, they represent the ultimate description of the tools we are using. You not only get information about the "what"; from the text you can deduce _why_ the standard requirements look the way they do.
When you see a hw question here, you may think that the OP is a fool for not being able to pick up the answer from the datasheet within seconds to minutes. But with experience, you have learned to master datasheets and know what to look for, and where to look. But have you spent the same time getting comfortable with the _real_ datasheet for your compiler?
The standards do know about architectural differences. They do know that C and C++ compilers may need extra data types or extra attributes (such as xdata) added to variable declarations.
So how do you know if your compiler vendor is well-versed in the standard? Hint: they probably aren't if your code suddenly breaks, or you get release notes stating that such a compiler-specific attribute now binds in a different way than in a previous version of the compiler.
Turbo C had far, near and huge pointers. That was specific to the x86 architecture, but not much has changed. If the compiler vendor knows the standard, they can add an xdata attribute and you will know whether you should write the word first on the line, before the star, after the star, or after the variable name. The far, near and huge attributes have survived the x86 era. Some compilers may have far, some may have _far, and some may have __far. But the declaration still looks the same.
To you, xdata is a violation of the C standard, and a reason to ignore the standard. To me, the standard has already told Keil how to implement the xdata extension in a way that I will understand if it binds to the left or to the right.
(to be continued)
"Have you ever read the 'C' standard? yes (I have a K&R somewhere)
I guess that was a typo. Did you mean: "no (I have a K&R somewhere)"?"
How do you know that? Again you throw out accusations for which you have no basis whatsoever.
How else should I interpret your response? You seem to be confusing K&R with the standard, and the knowledge you display of the content of the standard doesn't give any indication that you have read it.
It is obvious from "I do not give a hoot about portability." / "Yes, I know you're proud of your stance on that issue." that you have no concern for readability and efficiency, since 'portability' is of such paramount importance to you.
I have great concern for readability - it is vital for maintainability. I'd say it is probably second on my list of requirements behind correctness.
'Efficient' code - by which I assume you mean code optimised for speed rather than readability - I only use where absolutely necessary. Perhaps you could try compiling with a higher optimisation level to reduce the frequency with which you need to hand optimise code? Or would that impede your 'development by debugger' coding technique too much?
And I take no pride in not giving a hoot about portability; the pride is definitely yours, since you have never argued against the fact that e.g. "a million" #if and #ifdef (to make code 'portable') will make code unreadable.
You have some odd ideas. Wrapping code in preprocessor directives doesn't make it portable - in fact, it makes it clear that it is non-portable.
not for the C51, but for the C166. It looked like world war three!
the point is that what you should work with is how your tool behaves, not some 'standard' that might or might not apply.
our industry would never have reached ANY coding standards whatsoever? Of course any tool should adhere to existing standards as far as possible/practical, but here are a couple of examples of what I mean: For the benefit of "PC programmers", Keil '51 can do malloc(). ARGH, have you seen how it works in the little resource-starved '51? To make use of the unique facilities of the '51, Keil has introduced the BIT variable type.
Erik,
Don't you think that if tool vendors would have had this slogan ticking in their head...
our industry would never have reached ANY coding standards whatsoever?
"ISO (formerly ANSI) Standard 'C' == Keil 'C'." You conveniently ignore that while "Keil C" has a variable type named BIT, you will not find that in the "ISO (formerly ANSI) Standard 'C'".
"I guess that was a typo. Did you mean: "no (I have a K&R somewhere)"?" How do you know that? Again you throw out accusations for which you have no basis whatsoever.
It is obvious from "I do not give a hoot about portability." / "Yes, I know you're proud of your stance on that issue." that you have no concern for readability and efficiency, since 'portability' is of such paramount importance to you. And I take no pride in not giving a hoot about portability; the pride is definitely yours, since you have never argued against the fact that e.g. "a million" #if and #ifdef (to make code 'portable') will make code unreadable.
Just curious: do you ever use BIT? It would make the code non-portable, so I guess you do not; or maybe you obfuscate the code with some #if and #ifdefs.
Have you ever read the 'C' standard?
yes (I have a K&R somewhere)
I guess that was a typo. Did you mean:
"no (I have a K&R somewhere)"?
and no, of course not (it would be impossible)
Why the difference?
That I am much more concerned with "Keil '51 C" than with the utterly stupid concept of insisting on "Real C" on a '51 is another thing.
Maybe pseudocode will help:
ISO (formerly ANSI) Standard 'C' == Keil 'C'.
I know you don't like it, but you will have to accept it. Quite what you mean by 'Real C' I've no idea.
If, in Keil C, a 'for' were called a 'because', I would not have a problem
Again, ISO (formerly ANSI) Standard 'C' == Keil 'C' so that is not a possibility.
(you would have cow, I'm sure)
Er, what?
that to me would just be "Keil '51 C" and that it was not "real C" would be no concern of mine.
Again, ISO (formerly ANSI) Standard 'C' == Keil 'C'.
I work with the tool I have, and know that tool, I could not care less what other similar tools do.
Unfortunately you don't seem to even know what the tool is. Again, ISO (formerly ANSI) Standard 'C' == Keil 'C'.
PS I do not give a hoot about portability
Yes, I know you're proud of your stance on that issue.
that would come under the heading "real C", not "Keil '51 C"
Again, ISO (formerly ANSI) Standard 'C' == Keil 'C'. That is real 'C'.