I have a case where I need to count a variable up past 255, which is more than an unsigned char can hold.
I have a friend who works on 8-bit MCUs and swears that you should not use anything but 8-bit data types (like unsigned char) for optimal code speed.
So to count up using chars as the data type, I have to increment one char and, when it overflows, increment a second char until I hit the timing I want. (This is precisely how timers are handled in C51: hi-byte / lo-byte, etc.)
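Roughly what I mean, as a sketch (variable names are just for illustration, not my real code):

    unsigned char count_lo = 0;    /* low byte of the hand-rolled counter  */
    unsigned char count_hi = 0;    /* high byte                            */
    unsigned int  count16  = 0;    /* the plain unsigned int alternative   */

    void tick_two_chars(void)
    {
        count_lo++;
        if (count_lo == 0)         /* low byte wrapped 255 -> 0 */
            count_hi++;            /* carry into the high byte  */
    }

    void tick_int(void)
    {
        count16++;                 /* the compiler generates its own increment/carry */
    }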
Is this an old wives' tale? Is it really that inefficient to use an unsigned int? Does the compiler more or less break an int down into two chars for arithmetic purposes anyway?
I've got it wired up using the two chars, but I have used an int in other circumstances as well.
For all the hand-wringing in the code, if you just use an unsigned int you end up with less code that makes more sense to the next guy who looks at it.
If counting in an int gives you trouble (which is very unlikely), you have chosen the wrong processor. The only place you need to be careful is where atomicity could be an issue.
PS: if you choose to count "hichar" and "lochar", atomicity issues still apply.
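A sketch of what I mean by atomicity, assuming a Keil C51 setup where a timer ISR bumps the counter and main code reads it (names are illustrative):

    #include <reg51.h>                      /* for EA, assuming Keil C51 */

    volatile unsigned int tick_count = 0;

    void timer0_isr(void) interrupt 1       /* C51 interrupt syntax */
    {
        tick_count++;
    }

    unsigned int read_ticks(void)
    {
        unsigned int snapshot;
        EA = 0;                  /* disable interrupts: the 16-bit read is two   */
        snapshot = tick_count;   /* 8-bit moves, and the ISR must not fire in    */
        EA = 1;                  /* between them                                 */
        return snapshot;
    }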
So: if the max is less than 255, always use char; if more, use int.
@Erik
Good to see you!
Not entirely on topic -- care to opine? Here is another one I struggled with this week:
I have a function void DoSomething(X[],Y[],Z[]); -- it takes three array variables (not by reference).
I need to call this function with something like X[5], Y[7], Z[9] -- something goofy like that. Every array needs some different increment logic, and it has to loop three times. (Part of an RF decoding routine.)
I would have had to put in a few layers of loops to call the function, or I could just hardcode it and call it three times in a row. (You end up doing arithmetic all over the place.)
The computer scientist in me hates not putting it in a loop, but it actually seems faster from an instruction cycle standpoint just to hardcode the array values.
On the other hand, this is three lines of code:
DoSomething(X[3], Y[9], Z[3]);
DoSomething(X[5], Y[22], Z[6]);
DoSomething(X[2], Y[34], Z[9]);
Could you stomach calling a function like that three times in a row, vs. putting it in a loop and incrementing all kinds of counters?
I think it is the right 8051 play to just hardcode it.
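For comparison, a sketch of roughly what the loop version would look like (the index tables and wrapper name are made up for illustration; DoSomething, X, Y, Z as above):

    static const unsigned char xi[3] = { 3, 5, 2 };    /* hypothetical index tables */
    static const unsigned char yi[3] = { 9, 22, 34 };
    static const unsigned char zi[3] = { 3, 6, 9 };

    void decode_all(void)                              /* hypothetical wrapper */
    {
        unsigned char i;
        for (i = 0; i < 3; i++)
            DoSomething(X[xi[i]], Y[yi[i]], Z[zi[i]]);
    }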
In my opinion that question is not 8051-specific. In this case, with just 3 calls in a row, I would prefer that: it is simpler and easier to handle when doing unit tests, creating test cases, etc. But not many are doing unit tests nowadays.
An additional loop with additional variables would overcomplicate it and create more test cases in the unit test. In either case the code needs good comments.
That's the opinion of a 'computer scientist' who has worked 25 years with secure embedded applications.
"Could you stomach calling a function like that 3 times in a row, vs. putting it in a loop and incrementing all kinds of counters?"
If it is 3 and the loop has to change all kinds of variables, absolutely.
A comment that may apply: I (and others) started with C writing boilerplate code because we knew no better; then, as we learned more, we went to advanced/fancy C; then we realized that was impossible/difficult to debug, and now we are writing boilerplate C again.
Ha -- well said. Yes, it does make you feel like you aren't doing something "right" when you just write such simple logic for these MCUs -- but every time I have tried doing fancy stuff it runs slower (like doing multiple bit-shifts in a loop; it turns out you can only shift one bit per instruction...).
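What I mean about the shifts, as a sketch (function names are made up; the exact code generation depends on the compiler):

    unsigned char shift_by_constant(unsigned char x)
    {
        return x << 3;    /* constant count: the compiler can emit a short,
                             fixed sequence of single-bit shifts */
    }

    unsigned char shift_by_variable(unsigned char x, unsigned char n)
    {
        return x << n;    /* variable count: on an 8051 this typically becomes
                             a small loop, one bit per pass, so it costs n iterations */
    }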
I have to say, there is a guy who runs a software shop next to my office who wrote C for 25 years, so I ask him for advice on system architecture and big-picture stuff.
His shop went C# / Windows GUI for their core product ten years ago, so they are an OOP shop now, but he is great to ask when I run into crazy stuff writing bare-metal C (e.g. the first time I had a buffer overflow from indexing an invalid array element because a counter didn't reset, and I was wondering why a different variable was changing in the debugger -- you don't really see that writing PHP!)
We were talking about the eternal debate of global variables vs. pointers, and if you look around GitHub at really fancy libraries, they are enormously complex, pointer-based code.
I inherited a PIC codebase using global variables and mostly void methods acting on the global variables. I ported that over to C51 and built a new board using a new MCU.
I was considering doing a rewrite of an established, working system (global variable / extern type program) and pointerizing it. I asked him what he thought. He said avoid pointers like the plague.
I ported over two libraries to C51 from AVR C. These libraries are super-modularized pointer code, and it was excruciating to port them.
I will say it is nice to bring in a C library where you just point to an array, pass it to the library, and it just works and modifies the array without you shuttling variables around -- so I get the logic if you are writing a library for other people to use. But man, porting them is very time-intensive if you aren't using the platform the library was written for. It is very unclear where and what is getting modified.
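Roughly the contrast I'm describing, with made-up names on both sides:

    /* Pointer style (what the ported AVR libraries looked like): the caller
       hands the library a buffer and it modifies it in place. */
    void lib_decode(unsigned char *buf, unsigned char len);

    /* Global style (what the inherited PIC code did): everything operates on
       one well-known global buffer. */
    unsigned char rx_buf[32];
    void decode_rx_buf(void);      /* reads and writes rx_buf directly */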
.....
Ha -- well said. Yes it does make you feel like you aren't doing something "right" when you just write such simple logic for these MCUs -- but every time I have tried doing fancy stuff it runs slower (like doing multiple bit-shifts in a loop, turns out you can only bit shift once per instruction...).
I think you misunderstood. Yes, efficient code is important, and boilerplate tends to produce that. BUT the reason for boilerplate is MAINTAINABILITY. Trying to maintain/debug fancy code, you spend time and time and time on "what exactly does this line do", not to mention that a little (wrong) detail is easily hidden in fancy code.
Stressing a point: code that works but is not maintainable will, invariably, turn out to not work.
Priorities: 1) maintainability, 2) efficiency, 3) compactness.