Dear all, I have run into a problem caused by the big-endian byte order on the 8051. The symptoms are as follows:
A 32-bit value is stored as a byte array A[4] in XDATA space, e.g. offset 0 = 0x12; offset 1 = 0x34; offset 2 = 0x56; offset 3 = 0x78.
I used the following to get this value: *((UINT32 *) 0);
On the 8051 we get 0x12345678, because of the big-endian ordering...
Is there an "efficient" (code-size favouring) method to transform it to 0x87654321?
I tried: A[0]+A[1]<<8+A[2]<<16+A[3]<<24 but the generated machine code gets bigger...
Thanks !
Surely an access in little-endian order would produce 0x78563412, not 0x87654321.
Probably the most efficient way (space wise) to do this is to use a simple assembler function.
Are you sure that you have all the operator precedence & associativity correct there? Remember that + binds more tightly than <<, so the additions are done before the shifts.
For clarity if nothing else, I suggest you split it onto multiple lines - that won't necessarily add any overhead to the generated code...
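For example, something like this - just a sketch, assuming the UINT32/UINT8 types from the original post and an illustrative function name (on C51 you would probably also give the pointer an xdata memory qualifier):

typedef unsigned char UINT8;
typedef unsigned long UINT32;

/* Assemble the 32-bit value byte by byte, least-significant byte first.
   The explicit casts and parentheses make sure each byte is widened to
   32 bits and shifted before it is OR-ed into the result. */
UINT32 read_le32(const UINT8 *p)
{
    UINT32 v;

    v  = (UINT32)p[0];
    v |= (UINT32)p[1] << 8;
    v |= (UINT32)p[2] << 16;
    v |= (UINT32)p[3] << 24;

    return v;
}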
"Probably the most efficient way (space wise) to do this is to use a simple assembler function."
Compiler-generated code for this kind of thing may well not be inefficient at all - see: www.8052.com/.../162353
"Compiler-generated code for this kind of thing may well not be inefficient at all..."
The key word there is surely "may". Because of compiler versions, optimization levels, etc., it cannot be guaranteed.
Whereas, a little (and simple) piece of assembler would have fixed and predictable results.
Heck, for this function, it would be a case of passing the parameter in registers, shuffling the bytes around a bit, and returning the result in registers.
I sometimes think that the art of assembler is just fading away :(
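For reference, a C-level sketch of that shuffle (swap32 is an illustrative name; with Keil C51, the hand-written assembler equivalent would typically do little more than exchange the bytes of the long passed and returned in R4-R7):

typedef unsigned char UINT8;
typedef unsigned long UINT32;

/* Reverse the byte order of a 32-bit value by swapping bytes in place.
   On an 8-bit CPU this is essentially four byte moves, which is why a
   small assembler routine (or good compiler output) stays tiny. */
UINT32 swap32(UINT32 x)
{
    UINT8 *b = (UINT8 *)&x;   /* view the long as four bytes */
    UINT8  t;

    t = b[0]; b[0] = b[3]; b[3] = t;   /* swap the outer bytes */
    t = b[1]; b[1] = b[2]; b[2] = t;   /* swap the inner bytes */

    return x;
}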
Calling the C51 big- or little-endian is a bit hard, given its almost complete lack of instructions operating on more than 8 bits.
In this case it is a question of compiler vendor decisions, i.e. whether their 16-bit and 32-bit arithmetic emulation should be big- or little-endian.
But a big question for the C51 is: where do you get your 32-bit number from, and where is it going to be sent? In short - why does it matter whether the processor is big- or little-endian?
For internal use it shouldn't matter, unless you are about to implement a big-number library or similar. When communicating using a protocol, you have to follow the standard of that protocol. When designing a new protocol, it is often better to define it so that the little guy doesn't need any conversions, and possibly have the other side (maybe a PC) swap the bytes. On the other hand, the swap time is often not important - remember that doing actual computations on a 32-bit number already requires quite a number of assembler instructions...
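If the swap does end up on the PC side, it is only a few lines there as well - a minimal sketch (host_swap32 is an illustrative name):

#include <stdint.h>

/* Reverse the byte order of a 32-bit value on the host. */
static uint32_t host_swap32(uint32_t x)
{
    return ((x & 0x000000FFu) << 24) |
           ((x & 0x0000FF00u) <<  8) |
           ((x & 0x00FF0000u) >>  8) |
           ((x & 0xFF000000u) >> 24);
}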
Thanks for all your information...
The story is:
- The Windows application passes an array to the flash drive, e.g. A[4] = {0x11,0x22,0x33,0x44};
- After reading the flash and getting that array back into array B within the application, B[4] = {0x11,0x22,0x33,0x44};
- But when the flash drive firmware (the MCU is an 8051) reads this array into a 32-bit XDATA variable C, it gets C = 0x11223344!
- If the application does the same thing, it gets 0x44332211, because (presumably) the Intel CPU and OS are little-endian...
Thus the application and the firmware "see" different data, which causes a conflict...
Of course the application could be modified for this case, but I just want to know whether the firmware can solve this in an efficient (size-favouring) way, because there is only a little code space left...
As Per said, the format of the data sent to and received from the flash drive should be defined in the interface specification for the drive.
It is then up to you to ensure both that your Windows driver writes according to that specification, and that your 8051 "driver" reads according to that specification.
Clearly, one of them is broken!
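For example, if the interface specification says the four bytes on the medium hold the value least-significant byte first (an assumption here - it could equally be MSB first), both sides can store and assemble the value explicitly instead of relying on whatever the native byte order of the reading CPU happens to be. A sketch, with illustrative names:

typedef unsigned char UINT8;
typedef unsigned long UINT32;

/* Store a 32-bit value into the buffer in the byte order the
   interface specification defines (assumed LSB first here). */
void put_u32(UINT8 *buf, UINT32 v)
{
    buf[0] = (UINT8)(v      );
    buf[1] = (UINT8)(v >>  8);
    buf[2] = (UINT8)(v >> 16);
    buf[3] = (UINT8)(v >> 24);
}

/* Read it back the same way; the PC driver and the 8051 firmware then
   see the same value regardless of their native endianness. */
UINT32 get_u32(const UINT8 *buf)
{
    return  (UINT32)buf[0]
         | ((UINT32)buf[1] <<  8)
         | ((UINT32)buf[2] << 16)
         | ((UINT32)buf[3] << 24);
}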