Does this code port well to a 16-bit machine (it was written on a 32-bit machine)?
int code = 0x10;
int marker = 0x20;
unsigned long frame;

int main(void)
{
    frame  = (unsigned short)~marker << 11 | (unsigned short)code << 5 | marker;
    frame |= (unsigned long)~code << 16;
}
I see that
frame = (unsigned short)~marker<<11
gives 0x7FEF800 on a 32-bit machine, but why doesn't the cast remove the high word? I see the compiler using a BIC instruction (with 0x1F0000) rather than shifts to perform the cast. You are right: in the end I get 0x7FEFA20, but I would expect 0xFA20 regardless of the platform. Where am I wrong?
Precedence, and a bunch of assumptions about what type the computation is occurring in.
You should also consider using typedefs to get bit widths that are not compiler/architecture dependent.
"You should also consider using typedefs to get bit widths that are not compiler/architecture dependent."
Alas, the OP's toolset is "None", but since I read comments about 32-bit code, I could assume ARM, in which case there is stdint.h, which provides width-specific type definitions.
I see no precedence issues here - the standard clearly defines this order: bitwise NOT, then the cast, then the shift. The issue is the type of the shift's left operand: the cast to unsigned short does happen, but the integer promotions then widen the operand back to (32-bit) int before the shift, so the high bits come back.
frame = (unsigned short)(~marker<<11); // 0000F800 rather than 07FEF800
See what I did there? Awesome, it's called precedence. I changed it, so now the generated answer matches the expectation, rather than the order the compiler uses in the original case - which, as you say, is well defined, but not what the OP wanted.
Tamir,
EngageBrain();                // <<< This is a critical step!
do {
    ReadMessage();
} while (!Understood);
DigestReadInformation();
if (FullyUnderstood)
    WriteResponse();