I have a case where I need to count a variable up past 255, beyond the range of an unsigned char.
I have a friend who works on 8-bit MCUs and swears you should not use anything but 8-bit data types (like unsigned char) for optimal code speed.
So to count up using chars as the data type, I have to increment one char, and when it overflows, increment a second char until I hit the timing I want. (This is precisely how timers work in C51: hi-byte / lo-byte, etc.)
Is this an old wives' tale? Is it really that inefficient to use an unsigned int? Does the compiler more or less break an int down into two chars for arithmetic purposes anyway?
I've got it wired up using the two chars, but I have used an int in other circumstances as well.
For the amount of hand-wringing in the code, if you just used an unsigned int you would end up with less code that makes more sense to the next person who looks at it.
Ha -- well said. Yes, it does make you feel like you aren't doing something "right" when you write such simple logic for these MCUs -- but every time I have tried doing fancy stuff it runs slower (like doing multiple bit-shifts in a loop; it turns out you can only shift one bit per instruction...).
I have to say, there is a guy who runs a software shop next to my office who wrote C for 25 years, so I ask him for advice on system architecture and big-picture stuff.
His shop went C# / Windows GUI for their core product ten years ago, so they are an OOP shop now, but he is awesome when I run into crazy stuff writing bare-metal C (e.g. the first time I had a buffer overflow from accessing an invalid array element because a counter wasn't resetting, and wondered why a completely different variable was changing in the debugger -- you don't really see that writing PHP!)
We were talking about the eternal debate of global variables vs. pointers, and if you look around GitHub at really fancy libraries, they are built from enormously complex pointer-based functions.
I inherited a PIC codebase using global variables and mostly void functions acting on those globals. I ported it over to C51 and built a new board using a new MCU.
I was considering re-writing an established working system (a global-variable / extern type program) and pointerizing it. I asked him what he thought. He said avoid pointers like the plague.
I ported two libraries over to C51 from AVR C. These libraries are super-modularized pointer code, and porting them was excruciating.
I will say it is nice to bring in a C library where you just point to an array, pass it to the library, and it just works and modifies the array without you passing individual variables -- so I get the logic if you are writing a library for other people to use. But man, porting them is very time-intensive if you aren't on the platform the library was written for. It is very unclear where and what is getting modified.
A comment that may apply: I (and others) started out writing boilerplate C because we knew no better; then, as we learned more, we moved to advanced/fancy C; then we realized that was impossible/difficult to debug, and now we are writing boilerplate C again.
.....
I think you misunderstood. Yes, efficient code is important, and boilerplate tends to produce it. BUT the reason for boilerplate is MAINTAINABILITY. Trying to maintain/debug fancy code, you spend time and time and time on "what exactly does this line do?" -- not to mention that a little (wrong) detail is easily hidden in fancy code.
Stressing a point: code that works but is not maintainable will, invariably, turn out to not work.
Priorities: 1) maintainability, 2) efficiency, 3) compactness.