Let me tell you a story about a guy named Jed...
A long long time ago (pre-ANSI C), in a galaxy far far away, I worked for a company that had to develop internal "C" coding standards. "Jed" worked on one aspect of the standard while I worked on another. We would hold weekly meetings to reconcile our differences. In attendance, we had other professionals for simple sanity checking and to gain insights from different points of view.
Chris was one of our attendees and was a very experienced software veteran who had plenty of code in various satellite systems orbiting our planet today. By then, Chris was in upper management and graced us with his wisdom when he could.
Well during one of our weekly meetings, "Jed" and I got into a simple disagreement on a Rule about header files. We were at an impasse, so we waited for Chris to arrive and have him make the final decision: about five of us professional engineers were in the room.
When Chris arrived, he heard the arguments, and quickly announced that I was right. (Hence, Jed was wrong).
Well, Jed freaked out and wanted to take the guy outside and teach him a lesson! ... Jed was red-faced, quickly stood up, even took a step towards Chris, and said "Chris, let's just step outside and settle this! I am right and you don't know what you're talking about!" etc. etc.
The other attendees and I were duly impressed over Jed's technique of handling technical disagreements. Especially with upper management.
Instead of Jed trying to learn that he *might* be wrong, Jed leaped into the confrontation method of getting his way. Bullies do this because they lack the brain-power to reason through a disagreement. It is a childish trait.
Children are at a huge disadvantage when arguing with "an adult" (or somebody who is much smarter than they are), and they will become very frustrated over their strong desire to assert themselves and their inability to win the mental sparring. They will get physical and/or verbally abusive. Some people outgrow this, and some don't.
I think Jed showed his 'abilities' quite well. I find that this is true with so many people on so many subjects. I've seen this behavior many times over. I've seen it here on this forum.
When an "Original Poster", asks a question and people try to answer it (after much refinement of the OP's question) you get these side-bar posts where somebody will start attacking another poster's efforts. And I mean 'attack' and not augment or refine.
I don't have a problem with correcting or clarifying others, or even the occasional sprinkling of sarcasm, but when it ALWAYS devolves into some vindictive vitriol between a brisling poster and the rest of 'us,' I wonder if it is out of ignorance, malice, or some twisted form of self-entertainment. All three of which are adolescent behaviors. (en.wikipedia.org/.../Adolescence)
Since the regular players here are detail-oriented and thus savvy enough to know who I'm talking about, I don't think I have to name names.
He is critical enough to figure it out himself, so I would expect the offender to read this and, before he posts, ask himself whether he is demonstrating Ignorance, Malice, or Entertainment, or whether he is being an adult and providing a constructive post.
And, I hope his "Mea Clupea" (en.wikipedia.org/.../Mea_culpa) will be a silent one, because I'm kind of tired of reading his Hostile Postings (HP).
</rant> --Cpt. Vince Foster 2nd Cannon Place Fort Marcy Park, VA
(continued)
The standard is not about forcing a compiler vendor into producing carbon-copy products, all being "exactly" identical and totally limited by hard rules. It is about making sure that all common parts of the different compilers behave exactly as you expect they should. And the standard makes sure that extensions are added in a way that makes the extensions logical supersets of the language.
Do get the C standard. It isn't expensive. Whenever you see a thread discussing syntax problems - pick up the standard and try to find the relevant sections. If you do, then you will notice that the answers are clearly written and easy to find.
You will also notice that any such thread could actually be summed up as "Please read the manual", just like questions about "how do I initialize my watchdog?". Whether "Please read the manual" is a good answer is a separate issue, but the reason people ask questions is that they haven't read the correct documentation. Whether that is because they don't know what to read, don't understand the language, or are lazy is also a separate issue. But for anyone to be able to answer (knowing the answer and not just assuming they know it), some people really must have spent the time reading the ultimate datasheet.
A huge number of questions on this forum arise because people haven't spent time with the documentation. But whenever people get links to the Keil documentation, it is important to note that the Keil documentation isn't complete. It is just an addendum to read as a follow-up to the ISO C standard.
When I initially coded C, everything seemed obvious. A + B could only mean one thing. But that isn't true. If you look at the standard, it spends a lot of paragraphs explaining what the compiler must do to make sure that you don't get surprised when you try A + B. The standard worries about A being signed and B being unsigned, or A and B having different sizes. And it worries about the case when you assign the result to a variable of a different size. There are so many small details needed to make C look obvious and generate "obvious" results.
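As an illustration (a small sketch of my own, not something from the standard itself), here is the kind of surprise those paragraphs exist to prevent:

#include <stdio.h>

int main(void)
{
    int a = -1;
    unsigned int b = 1u;
    int wide = 300;
    unsigned char narrow;

    /* a is converted to unsigned int for the comparison, so the "obvious" result is wrong */
    if (a < b)
        puts("-1 < 1, as expected");
    else
        puts("surprise: -1 does not compare less than 1 here");

    /* assigning to a smaller type is another case the standard pins down */
    narrow = (unsigned char)wide;   /* 300 does not fit: narrow becomes 44 (300 mod 256) */
    printf("narrow = %d\n", narrow);

    return 0;
}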
Whenever the compiler vendor misreads the standard, you as the end user will get a big nose-bleed. If you know the standard, you know where the compiler vendor has erred. If you don't know the standard, you may assume that you did something wrong.
All changes Keil has made to their C51 compiler are so very tiny compared to the language standard that they are more or less a non-issue. If you throw away the actual declaration and just look at the code using the sbit data type, it will look like standard C. It will almost fully behave as standard C. It will almost be portable. That the data type is sbit doesn't mean much for portability, since the real portability issue isn't the sbit type but how another processor controls a port pin.
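A small sketch (my own, using the usual C51 idiom - the pin and function names are just examples) shows how little of it is non-standard:

#include <reg51.h>            /* Keil-provided SFR declarations for the 8051 */

sbit LED = P1^0;              /* the only non-standard line: bind a name to a port bit */

void led_on(void)
{
    LED = 1;                  /* from here on, the code reads like standard C */
}

void led_off(void)
{
    LED = 0;
}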
Following the standard isn't a question of allowing sbit or not. The most important part, when it comes to the standard, is whether sbit is signed or not, and what happens if you try to assign -1, 0, 1 or 2 to it, i.e. whether the behaviour follows the behaviour of the rest of the language, just as it is important to know what result you get (or whether it will be undefined) if you assign a value from an int or unsigned int into a signed or unsigned char.
It isn't really the C standard that controls how portable a program is. It is more a question of how you access the actual hardware, or how you handle your variables. A program can be highly portable while being written for a specific architecture, if 95% of the source is generic and located in one set of files, and the last 5% is target-specific and located in different files. And the code can still be very efficient. Or it can be made portable by using #define blocks (but I don't much like #define blocks because of readability issues).
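As a sketch of that 95/5 split (the file and function names below are only illustrative):

/* board_io.h - the thin, target-independent interface */
void board_sound_buzzer(unsigned char on);

/* alarm_logic.c - part of the generic 95%, knows nothing about the hardware */
#include "board_io.h"

void raise_alarm(void)
{
    board_sound_buzzer(1);          /* *what* to do lives here */
}

/* board_io_c51.c - the target-specific 5%, one such file per target */
#include "board_io.h"
#include <reg51.h>

sbit BUZZER_PIN = P1^1;

void board_sound_buzzer(unsigned char on)
{
    BUZZER_PIN = on ? 1 : 0;        /* *how* to do it lives here */
}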
On the other hand, you can write code where every single line of code is intentionally written for the target architecture. But the code can still be very inefficient.
Portable and efficient are not mutually exclusive. In some cases you can have both. In some cases you can't.
For me, portability is to a large part about how easily you can move the code to a different target, and how easily you can make sure that the ported code will produce the same results.
Your design controls how easily you can adapt the code for a different target. Your understanding of the C/C++ standards and of "standard" portability issues, such as the size of an int or the byte order, affects how hard it will be to get the expected results out of the ported code.
Your projects may be so special that you have made the decision that you don't have to spend any time on making the design portable, and instead spend that design time on getting the hw to dance for you. But you should still spend time on the language side of portability, since that very much defines how portable _you_ are. Can _you_ be moved to a different architecture, and after reading the datasheets for the new processor start to produce working code? And will the code work because _you know_ that it will work, or because you ran it and saw the result you hoped to see?
Per, Excellent post. Well said, well thought-out, and well executed.
--Cpt. Vince Foster 2nd Cannon Place Fort Marcy Park, VA
Your projects may be so special that you have made the decision that you don't have to spend any time on making the design portable, and instead spend that design time on getting the hw to dance for you. But you should still spend time on the language side of portability, since that very much defines how portable _you_ are. Can _you_ be moved to a different architecture, and after reading the datasheets for the new processor start to produce working code?
Per, anything will be more or less portable. I have reused code across platforms and will state, as I have before, "to port non-portable code is less effort than making the original code portable", not least because at the time you write the original code, you usually do not know what it might be ported to some day. As an example: if you write some code in C51, should you make the effort of making it portable to SDCC?
"And will the code work because _you know_ that it will work, or because you ran it and saw the result you hoped to see?" As far as I am concerned, if it works for any reason except "the code works because _I know_ that it will work", it does not work, whatever the result of a 'test' might show. The internet is flooded with 'working' code that only works under the exact same circumstances as those of the original 'developer'.
Erik
PS - vocabulary:
portable: code that, without any change, will compile and work when compiled by C51, SDCC and GNU
non-portable: code that requires nominal changes to work when compiled by C51, SDCC or GNU
On this whole portability thing, I'm glad erik provided the glossary of terms, because when erik points out that portable code is rare, he is right by his definition of the terms. I don't think ANY embedded professional would expect to write in pure "C". It would be simply smashing if they could.
All processor variants and their cohorts in the IDE business will have special options for squeezing every last performance drop out of the final executable code.
Be it speed or space, the need to optimize your code is driven by us, the customers. Imagine a pure "C" compiler (no deviations from the standard at all) for the 8051. To eke core performance out of it, we would complain about the gyrations we would need to go through to achieve 'assembly level' performance, and say that if they (Keil) would only provide a "C" variant extension called 'bit' or 'xdata' or 'data' like their competitor does, we'd be happy. We would be able to eke that performance out and be happy.
So they do. And so does their competition, etc. But as Per points out, at least these deviations are only a stone's throw away from the main road/path that The Standard has carved out. And of course, going from one processor to the next, they are still only a stone's throw away from each other at the "C" level. Without knowledge of The Standard, how can any embedded professional determine if that stone was thrown, or shot out of a Howitzer?
I prefer to know my tools well, but at the same time, I don't want to NEED to know the tool in order to complete a task or project. If that Howitzer deviation must be taken, then I get a bit aggravated. Especially when I want my code to last years, and not be boxed into a corner due to a highly specific compiler path that cannot be logically reworked for a new or different compiler or even a new or different processor.
I consider it portable if there is 'minimal' impact on the code and 'minimal' impact on the coder to go from one platform to another. Centralizing the non-portables or high-risk code helps, and trying to write the code closer to the 'pure' C environment aids in making code portable.
So when we speak of 'portable' or 'nominal' or 'minimal' they are all subjective concepts, and we can go on-and-on-and-on refining what 'portable' means (and the derivative discussion on how to write it), but I doubt if there is going to be a clear-cut answer. To presume to hold The Answer is clearly spratter-brained.
So when we speak of 'portable' or 'nominal' or 'minimal' they are all subjective concepts, and we can go on-and-on-and-on refining what 'portable' means (and the derivative discussion on how to write it), but I doubt if there is going to be a clear-cut answer. To presume to hold The Answer is clearly spratter-brained. "ALL code is portable, the only difference is the amount of work involved in porting it"
so, Vince, we agree.
erik,
Yep. We agree.
"ALL code is portable, the only difference is the amount of work involved in porting it"
I guess I could have said it that way. (It's my fingers... they ramble on sometimes---at least that is what my attorney told me to say during the trial).
Vince, what trial are you referring to? You may not believe me, but my team leader ordered me to change the function name "MachineNotSafeForActiveSpreading" into something else (I chose "MachineNotReadyForActiveSpreading") out of fear of future litigation! (You see, just mentioning "safety" in source code is a risk in terms of future lawsuits if something goes wrong, when selling something in the US market!) Was your trial related to your software career?
As much as I would thoroughly enjoy a real trial, my remarks were just intended as humour.
I do believe you when it comes to such minor things as 'not safe' within source code as being a liability.
Hopefully you told your team leader that the function name gets changed once it is in fact safe. But I doubt it. And even then the ambulance chasers (and all of those attorneys are) would convince a jury that the 'old' code that had the words 'not safe' buried in a comment was somehow responsible for the collapse of the Roman empire.
When we discuss the US Market, attorneys and lawsuits, we start getting into politics, and I can warn you right now, I'd win that one. (I will also not respond to any political stuff, so don't start. I pointed it out because I would LOVE to say so much about that but it is inappropriate here. But, sadly, the US Market does have that problem though)
So, no trial.
Also, I am primarily an electrical engineer and then a software engineer. (That's what we called them back then, not "CE/CS/??"). Most of my work has been in the R&D missile & aerospace industries, so 'we' don't get sued when we put in things like:
char Totally_Unsafe_And_Known_To_Kill( short victims ) { ... }
We just get shot by a firing squad. (No, Tamir, we don't really get shot... it was a joke).
But when it comes to Human Safety Factors, it is unsettling when you know it is your stuff that can indeed kill an innocent victim if you screw up with your electronics/software engineering. You pay 'extra' attention to those 'little details.' But that is why we have these safety review boards! Oh, and that thing people call "STANDARDS" too.
(Note: Accidental Deaths/Injuries = 0)
Vince, Don't misunderstand me, but I'd never be able to do what you do. I am convinced that the systems you work(ed) on are probably some of the most fascinating devices we can imagine from a pure technical point of view, but I would simply not be able to contribute to killing people as part of my daily job. I didn't make a fuss of that function name; hell, I changed it. Of course, that would not change the fact that the subsystem is as deadly as a guided missile if you stick a hand or a head into it...
I pointed it out because I would LOVE to say so much about that but it is inappropriate here
Vince, please, you have people here hurling at each other stuff like "blabbering idiot", "smoked sardine", "liar", "crawl back into your can" etc. and you make a fuss out of a little politics :)
"...and you make a fuss out of a little politics..."
Okay, fine. I'll keep it short, but I think this isn't exactly the correct forum or for that matter the correct thread either. This will kill two posts with one reply (and I guess I like that)...
I would simply not be able to contribute to killing people as part of my daily job.
I understand. I get that all the time. "How can you do that?"
I usually point out that the systems, at one time not too long ago, used to kill many innocent people as collateral, but now they only kill a few... the right few.
It used to take 100 rounds/bombs/missiles to take out 'the bad guy'; now it takes 1. Those 99 missed rounds hit 'other stuff', which can include 'the good guys'. Also, those 100 rounds cost a bundle, not only in material costs but also in the logistics of getting them there, so the cost of that 1 is well worth it from the bean-counter perspective. In human life, the cost savings is incalculable.
So believe it or not, I view it as saving lives. "Those bad guys are killed but they planned to suicide-bomb a playground at high-noon anyway."
The objection most people have is misplaced. The *need* to use the weapons is really at issue.
Even so, I don't have a problem with killing the enemy. I even enjoy the YouTube videos of my stuff, in action. Makes me proud.
I do believe that I am on the correct side of the war (and any war the US engages in), and with justification. I'm not building the Chest-Belt Detonator 2000 for the next hospital or marketplace suicide bombing: ONLY bad guys do that.
But Tamir, I do respect your opinion. And the many others out there who also can't/won't do this kind of work.
"Vince, please, you have people here hurling at each other stuff like "smoked sardine"" - please note that query.nytimes.com/.../abstract.html defines 'sprat' as a *** fishy thing.
and I did not come up with that moniker (Sprat)
Nice catch there.
When you browse through the "*** fishy" communications, you come away with a slimy feeling that Sprats are hostile when they are not in their native environment. Must be a defense mechanism.
"[...] because all ARM registers are 32 bit, 2 instructions are required to test a bit: a shift to right, then a separate instruction to test the value.
it is much faster to use a 32 bit integer as a container for your bit fields."
Tamir: bit fields did not enter the standard because of ultimate speed - they are not intended to overlap bits in SFRs - but because they allow you to trade data size for code size when you have many state variables that each require only a very limited numeric range.
I can have 256 alarm type definitions, where each definition contains a bit-field with flags for 'speech-connecting', 'requires acknowledge', 'number of repeats', ... Having 5 one-bit fields and one 3-bit field would save 5 bytes per alarm type definition compared to using an unsigned char for each piece of information.
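A sketch of what such a definition might look like (the extra flag names are just placeholders; the exact storage layout is of course up to the compiler):

struct alarm_type {
    unsigned int speech_connecting    : 1;   /* one-bit flags */
    unsigned int requires_acknowledge : 1;
    unsigned int flag3                : 1;   /* placeholder names for the remaining flags */
    unsigned int flag4                : 1;
    unsigned int flag5                : 1;
    unsigned int max_dial_count       : 3;   /* 3-bit field: number of repeats, 0..7 */
};  /* all eight bits can share one storage unit instead of six separate unsigned chars */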
Using bit fields instead of manually performing bit operations makes the code easier to read, as you can see from the following examples. And it makes it easy to change the size of the fields when requirements change, without having to hunt down a number of helper constants or touch the individual source lines.
The bit field will generate the same code as if I had manually performed the bit operations, but the code will look nicer if I write:
if (alarmtype->speech_connecting) enable_speech();
than if I write:
if (alarmtype->flags & SPEECH_CONNECTING) enable_speech();
And the readability improves even more with bit fields larger than one bit, i.e. it is nicer to write:
if (dial_count >= alarmtype->max_dial_count) fail_alarm();
than if I have to write:
if (dial_count >= ((alarmtype->flags >> DIAL_COUNT_SHIFT) & DIAL_COUNT_MASK)) fail_alarm();
If I really have room to store all state variables in 8, 16 or 32 bits, then I don't need bit variables.
On a PC, I might have plenty of memory. But the reduced data size from using bit fields (or from manually performing the bit operations) may allow the data to fit in the data cache, resulting in faster code. And the concurrent processing of multiple instructions may hide the extra code needed for extracting the bit field.
Erik: "please define 'raw'".
My definition of raw memory structures is to transmit or store data in the exact format that the compiler puts the data in memory. Such data will have much of its format defined by the compiler vendor, not by you or the person responsible for the other side of a communication link. And since the compiler vendor has the full right to change that definition, you cannot be in possession of a document that correctly documents the data format used.
Me: "Transmitted or stored data should be described by a 100% complete document" Erik: "it is, of course, how else could i use it?"
It can't be 100% documented if it relies on mechanisms that the compiler vendor may change between different releases of the compiler, or that are likely to fail if the source code is built with another vendor's compiler.
To be 100% documented, the document must specify the actual bit location of every single bit. And the source code must make sure that the information is really placed at that bit position, and doesn't just end up there by chance because the current compiler, through some private design decision, happens to choose that location.
There is no problem using any kind of byte order for a transmission, as long as you have a document that says that little-endian is always used, or that bit 0x40 of the third transmitted byte (before any endian byte-swapping has been performed) in message xx specifies which - of two possible - endian alternatives is used. Just relying on memcpy() will not enforce the required endianness. If you know that your processor has the correct byte order, memcpy() may do the job when writing, but what happens if the code is run on a different processor?
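A small sketch of the difference (assuming a 16-bit field and a documented little-endian wire format; the function names are just illustrative):

#include <string.h>

/* raw copy: the byte order is whatever the current compiler/CPU happens to use */
void pack_u16_raw(unsigned char *buf, unsigned short value)
{
    memcpy(buf, &value, sizeof value);
}

/* explicit serialization: little-endian by design, on every target */
void pack_u16_le(unsigned char *buf, unsigned short value)
{
    buf[0] = (unsigned char)(value & 0xFFu);
    buf[1] = (unsigned char)((value >> 8) & 0xFFu);
}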
Transmitting bit fields (as opposed to manually handled flags) will always be borked, since you can't write documentation that takes into account possible future changes of a compiler.
If the other side is transmitting a raw bit-field, then you have to try to deduce the current location of these fields, while living with the knowledge that a changed compiler on the other end may require you to rework your side of the communication. If the coder (or technical lead) on the other side of the communication link was a fool, you will have to suffer, since both sides will - by implication - be non-portable.
Using bit fields inside code gives cleaner code. But a lot of developers intentionally choose to manually assign the bits, just to avoid the extra work of having to write conversion functions "flags_to_native" and "native_to_flags" when they need to share information, or store the information on a medium where it may later be read by an application built with another compiler or built for another architecture.
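Such a pair isn't much work. A sketch, reusing the alarm_type fields sketched above (the wire layout here - bit 0, bit 1, bits 3..5 - is only an example; the point is that this code, not the compiler, fixes it):

/* pack some of the native bit fields into a documented wire format */
unsigned char native_to_flags(const struct alarm_type *a)
{
    return (unsigned char)((a->speech_connecting    ? 0x01u : 0u) |
                           (a->requires_acknowledge ? 0x02u : 0u) |
                           ((unsigned int)a->max_dial_count << 3));
}

/* unpack the documented wire format back into the native bit fields */
void flags_to_native(struct alarm_type *a, unsigned char flags)
{
    a->speech_connecting    = (flags & 0x01u) != 0;
    a->requires_acknowledge = (flags & 0x02u) != 0;
    a->max_dial_count       = (flags >> 3) & 0x07u;
}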