I have a case where I need to count a variable up past 255, which is more than an unsigned char can hold.
I have a friend who works on 8-bit MCUs and swears that you should not use anything but 8-bit data types (like unsigned char) for optimal code speed.
So to count up using chars as the data type, I increment one char, and when it overflows I increment a second char, until I hit the timing I want. (This is precisely how the hardware timers work in C51: a hi byte and a lo byte.)
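Roughly what that looks like in code (a sketch only; `tick`, `DESIRED_HI` and `DESIRED_LO` are placeholder names I've made up for illustration):

```c
/* hi-byte / lo-byte software counter; the DESIRED_* values are
   placeholders for whatever timing target you actually need */
#define DESIRED_HI 0x12u
#define DESIRED_LO 0x34u

static unsigned char count_lo = 0;
static unsigned char count_hi = 0;

void tick(void)
{
    count_lo++;
    if (count_lo == 0u)      /* low byte wrapped 255 -> 0: carry out */
        count_hi++;          /* propagate the carry into the high byte */

    if (count_hi == DESIRED_HI && count_lo == DESIRED_LO) {
        count_lo = 0;
        count_hi = 0;
        /* timing target reached: do the periodic work here */
    }
}
```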
Is this an old wives' tale? Is it really that inefficient to use an unsigned int? Does the compiler more or less break an int down into two chars for arithmetic purposes anyway?
I've got it wired up using the two chars, but I have used an int in other circumstances as well.
Given the amount of hand-wringing in the code, if you just use an unsigned int you end up with less code, and it will make more sense to the next person who looks at it.
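For comparison, here is roughly what the unsigned int version looks like (again a sketch; `tick` and `DESIRED_COUNT` are placeholder names). On an 8-bit target the compiler emits the same increment-and-carry byte sequence for you:

```c
#define DESIRED_COUNT 0x1234u   /* placeholder timing target */

static unsigned int count = 0;  /* 16 bits on typical 8-bit toolchains */

void tick(void)
{
    count++;                    /* compiler generates the low-byte
                                   increment and carry into the high
                                   byte itself */
    if (count == DESIRED_COUNT) {
        count = 0;
        /* timing target reached: do the periodic work here */
    }
}
```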
a comment that may apply: I (and others) started out writing boilerplate C because we knew no better; then, as we learned more, we moved on to advanced/fancy C; then we realized that was difficult or impossible to debug, and now we are writing boilerplate C again.
Ha -- well said. Yes, it does make you feel like you aren't doing something "right" when you just write such simple logic for these MCUs -- but every time I have tried doing fancy stuff it runs slower (like doing multi-bit shifts in a loop; it turns out you can only shift by one bit per instruction...).
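To illustrate the shift point (a generic sketch, not specific to any one compiler or core; `shift_a` and `shift_b` are made-up names): on a core without a barrel shifter, a shift by n is executed as n single-bit shifts, so hand-unrolling it buys you nothing:

```c
/* both forms compile to about three single-bit shift instructions
   on a core that can only shift one bit at a time */
unsigned char shift_a(unsigned char v)
{
    return (unsigned char)(v << 3);  /* expanded to three 1-bit shifts */
}

unsigned char shift_b(unsigned char v)
{
    v <<= 1; v <<= 1; v <<= 1;       /* hand-unrolled: no faster */
    return v;
}
```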
I think you misunderstood. Yes, efficient code is important, and boilerplate tends to produce it. BUT the reason for boilerplate is MAINTAINABILITY. Trying to maintain/debug fancy code, you spend time and time and time on "what exactly does this line do?", not to mention that a little (wrong) detail is easily hidden in fancy code.
stressing a point: code that works but is not maintainable will, invariably, turn out not to work.
priorities: 1) maintainability, 2) efficiency, 3) compactness