My knowledge of current compiler optimization technology is very limited (or ancient). I am familiar with VHDL and Verilog for FPGA chips, where extreme optimization is typical (dead-code removal, code duplication to meet performance constraints, morphing from the language constructs into the available hardware constructs).
In the context of the large variety of small microprocessors (8, 16, or 32 bits), each with a unique collection of peripherals: what would raise the coding to higher levels of abstraction, given that one is still dealing with I/O ports and peripherals?
Three of many perspectives on high-level abstraction/optimization:
Ran across Arch D. Robison's "Impact of Economics on Compiler Optimization" (portal.acm.org/citation.cfm?id=376751, www.eecg.toronto.edu/.../arch.pdf) while searching on "extreme compiler optimization".
There is the Ada Ravenscar Profile for folding the RTOS into the application code: en.wikipedia.org/.../SPARK_programming_language and "Guide for the use of the Ada Ravenscar Profile in high integrity systems", www.sigada.org/.../ravenscar_article.pdf.
In VHDL or Verilog one writes the code and then uses constraints to force timing and pin allocation. In theory one could write a functional program and constrain it into a particular setting (probably a thesis project). Or, more to the point, write a functional VHDL program and constrain it to run on an 8051.
What is a rapple?
a red apple ?
an apple that raps?
other?
Incomplete erasure of "RE:" followed by 'apple'.
It would be unlikely for a compiler to recognize particular idioms for the bitwise operators and turn those into bit instructions. In theory, the compiler could see
P1 = P1 | 0x01;
and notice that P1 is bit-addressable and that the net effect is just to set one bit, and generate the appropriate SETB instruction. In practice, I wouldn't expect it. If I write |= 0x03, |= 0x07, |= 0x55, and so on, when does the compiler switch to a byte write instead of a series of SETBs?
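For concreteness, here is a minimal sketch of those idioms, assuming Keil C51-style SFR declarations (reg51.h declares P1 as a bit-addressable SFR at address 0x90). The instructions in the comments are what a compiler could plausibly emit, not what any particular one actually does:

#include <reg51.h>     /* Keil C51 header: declares sfr P1 = 0x90      */

void set_bits(void)
{
    P1 |= 0x01;        /* one bit:   could become  SETB P1.0           */
    P1 |= 0x03;        /* two bits:  SETB P1.0 / SETB P1.1, or one     */
                       /* byte-wide read-modify-write  ORL P1,#03h     */
    P1 |= 0x55;        /* four bits: ORL P1,#55h is almost certainly   */
                       /* cheaper than four SETBs                      */
}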
I can imagine it's even a good idea for the compiler not to use the bit operations, just to leave the programmer the flexibility to code either the byte-wide operation or the bit-wise one as he chooses. There are no guarantees there, of course. That leads us into the realm of a #pragma, so that the programmer can choose the implementation of the C statement.
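One way the programmer can already make that choice explicit is the sbit extension, sketched below (assuming Keil C51; the name LED0 is mine):

#include <reg51.h>

sbit LED0 = P1^0;      /* C51 extension: names bit 0 of port P1        */

void choose_access(void)
{
    LED0 = 1;          /* explicit bit access:  SETB P1.0              */
    P1 |= 0x01;        /* explicit byte access: ORL P1,#01h            */
}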
There might be a better argument for bitfields that are one bit wide. Generated code for bitfields is usually bad (in my past experience), which leads to a vicious cycle of the language construct being ignored, which leads to it not being particularly well supported and optimized.
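For reference, the construct in question, in standard C; how such one-bit fields are laid out and accessed is implementation-defined, which is part of the problem:

struct flags {
    unsigned int ready : 1;    /* one-bit-wide bitfields             */
    unsigned int error : 1;
};

static struct flags status;

void mark_ready(void)
{
    status.ready = 1;  /* typically compiled as a read-modify-write  */
                       /* of the containing word, not a single bit   */
                       /* instruction                                */
}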
"Generated code for bitfields is usually bad (in my past experience), which leads to a vicious cycle of the language construct being ignored, which leads to it not being particularly well supported and optimized."
I wholeheartedly agree re the bitfields discussed in a C book; however, I totally disagree re SFR bits. There is no way to make those "implementation dependent", and thus what you say above does not apply.
Erik
"It would be unlikely for a compiler to recognize particular idioms for the bitwise operators and turn those into bit instructions ... when does the compiler switch to a byte write instead of a series of SETB's?"
See: http://www.keil.com/forum/docs/thread11894.asp
I wouldn't be surprised to find that the answer bears at least some relation to the price of the compiler...