
What optimizations should be expected from a C compiler for small uPs?

My knowledge of current compiler optimization technology is very limited (or ancient).
Am familiar with VHDL and Verilog for FPGA chips where extreme optimization is typical (dead code removal, code duplication to meet performance constraints, morphing from the language constructs into available hardware constructs).

In the context of the large variety of small microprocessors (of 8, 16 or 32 bits), each with unique peripheral collections; what would raise the coding to higher levels of abstraction, given one is still dealing with IO ports and peripherals?

  • "what would raise the coding to higher levels of abstraction, given one is still dealing with IO ports and peripherals?"

    I'm sorry - I don't understand that question!

    In answer to the title of your post, "What optimizations should be expected from a C compiler for small uPs?", you shouldn't just make arbitrary assumptions - you should read the appropriate Documentation to find out!

    For Keil C51, it's here:
    http://www.keil.com/support/man/docs/c51/c51_optimize.htm

  • what would raise the coding to higher levels of abstraction, given one is still dealing with IO ports and peripherals?

    I guess you can always wrap things in a container - assuming it is not optimized away by the compiler :)
    Depending on the application, that may be a good idea.

  • Thanks for the link.

    Trying to understand whether it's possible to unify sbit variables with mask & write for non-bit IOs. Probably a detour that I don't need to follow at this time.

  • What optimizations should be expected from a C compiler for small uPs?

    what would raise the coding to higher levels of abstraction, given one is still dealing with IO ports and peripherals?
    "dealing with IO ports and peripherals" has absolutely nothing to do with the compiler. It has to do with the fact that in the uC world the processor, not the programmer's convenience, has priority. "Abstracting" I/O is not a compiler issue, it is an OS issue and, for the sake of efficiency, all sensible (my opinion) uC projects do not employ an OS.

    Erik

  • Three of many perspectives on high level abstraction/optimization:

    Ran across Arch D. Robison's
    "Impact of Economics on Compiler Optimization"
    portal.acm.org/citation.cfm?id=376751
    www.eecg.toronto.edu/.../arch.pdf
    while searching on "extreme compiler optimization"

    There is the ADA Ravenscar Profile for folding the RTOS into the application code.
    en.wikipedia.org/.../SPARK_programming_language
    and "Guide for the use of the Ada Ravenscar Profile in high integrity systems"
    www.sigada.org/.../ravenscar_article.pdf

    In VHDL or Verilog one writes the code and then uses constraints to force timing and pin allocation. In theory one could write a functional program and constrain it into a particular setting (probably a thesis project). Or, more to the point, write a functional VHDL program and constrain it to run on an 8051.

  • "Trying to understand if it's possible to unify sbit variables with mask & write for non-bit IOs."

    It was a rather round-about way to ask that question!

    The bit-addressable stuff is very specific to the 8051, so it's not really going to be portable at all.

    And not all 8051 data is bit addressable - so you can't even really have anything "generic" just for the 8051.
    It's a feature of certain addresses that you can choose to either use or not.

    It's the normal tradeoff: you can take advantage of all the specifics of the architecture and gain its maximum performance - or you can stick to generics that'll work anywhere and give you portability.

    This doesn't really have anything to do with compiler optimisation - it's about how you write your code in the first place.

  • What is a rapple?

    a red apple ?

    an apple that raps?

    other?

  • What is a rapple?

    an incomplete erasure of "RE:" followed by 'apple'

  • It would be unlikely for a compiler to recognize particular idioms for the bitwise operators and turn those into bit instructions. In theory, the compiler could see

    P1 = P1 | 0x01;

    and notice that P1 was bit addressable, the net effect is just to set one bit, and generate the appropriate SETB instruction. In practice, I wouldn't expect it. If I write |= 0x3, |= 0x7, |= 0x55, etc, when does the compiler switch to a byte write instead of a series of SETB's?

    I can imagine it's even a good idea not to use the bit operations, just to leave the programmer the flexibility to code either the byte-wide operation or the bit-wise one as he chooses. There are no guarantees there, of course. That leads us to the realm of a #pragma so that the programmer can choose the implementation of the C statement.

    There might be a better argument for bitfields that are one bit wide. Generated code for bitfields is usually bad (in my past experience), which leads to a vicious cycle of the language construct being ignored, which leads to it not being particularly well supported and optimized.

    Generated code for bitfields is usually bad (in my past experience), which leads to a vicious cycle of the language construct being ignored, which leads to it not being particularly well supported and optimized.
    I wholeheartedly agree re the bitfields discussed in a C book; however, I totally disagree re SFR bits. There is no way to make those "implementation dependent", and thus what you say above does not apply.

    Erik

  • "It would be unlikely for a compiler to recognize particular idioms for the bitwise operators and turn those into bit instructions ... when does the compiler switch to a byte write instead of a series of SETB's?"

    See: http://www.keil.com/forum/docs/thread11894.asp

  • I wouldn't be surprised to find that it involves at least some relation to the price of the compiler...