My knowledge of current compiler optimization technology is limited (or ancient). I am familiar with VHDL and Verilog for FPGAs, where aggressive optimization is the norm: dead-code removal, logic duplication to meet timing constraints, and mapping language constructs onto whatever hardware primitives are available.
Given the large variety of small microprocessors (8, 16, or 32 bits), each with its own unique collection of peripherals, what would raise the coding to a higher level of abstraction when one is still dealing directly with I/O ports and peripherals?
"It would be unlikely for a compiler to recognize particular idioms for the bitwise operators and turn those into bit instructions ... when does the compiler switch to a byte write instead of a series of SETB's?"
See: http://www.keil.com/forum/docs/thread11894.asp
I wouldn't be surprised to find that the answer correlates at least somewhat with the price of the compiler...