Dear all, I ran into a problem caused by the 8051's big-endian byte order. The symptom is as follows:
A 32-bit value A[4] sits in XDATA space, e.g. offset 0 = 0x12; offset 1 = 0x34; offset 2 = 0x56; offset 3 = 0x78
I used the following to get this value: *((UINT32 *) 0);
On the 8051 this reads as "0x12345678" because the part is big-endian...
Is there any "efficient" (size-favoring) method to byte-swap it to 0x78563412?
I tried to use: (UINT32)A[0] | ((UINT32)A[1]<<8) | ((UINT32)A[2]<<16) | ((UINT32)A[3]<<24) (note the casts and parentheses are needed; in the plain form A[0]+A[1]<<8 the + binds tighter than <<), but the "machine code" gets bigger...
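For reference, a minimal sketch of that byte-wise load with the width and precedence issues fixed (the helper name `load_le32` is hypothetical, and `uint32_t` stands in for UINT32):

```c
#include <stdint.h>

/* Assemble a 32-bit value from four bytes, least-significant byte
   first. The casts force the shifts to happen in 32-bit width; without
   them, (promoted) int arithmetic on an 8051 compiler would lose the
   high bytes. */
static uint32_t load_le32(const uint8_t *p)
{
    return  (uint32_t)p[0]
          | ((uint32_t)p[1] << 8)
          | ((uint32_t)p[2] << 16)
          | ((uint32_t)p[3] << 24);
}
```

With `p` pointing at {0x12, 0x34, 0x56, 0x78} this yields 0x78563412.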
Thanks !
Thanks for all your information...
The story is:
- The Windows application passes an array to the flash drive, e.g. A[4] = {0x11,0x22,0x33,0x44};
- After reading the flash back into an array B, the application gets B[4] = {0x11,0x22,0x33,0x44};
- But the flash drive firmware (the MCU is an 8051) reads this array into a 32-bit XDATA variable C and gets C = 0x11223344! If the application does the same task, it gets 0x44332211, (maybe) because the Intel CPU and OS are little-endian...
Thus the application and the firmware "see" different data, which causes a conflict...
Of course the application could be modified for this case, but I just want to know whether the firmware can solve this in an efficient way (favoring size), because there is little spare code space...
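If the firmware must present those bytes as a native 32-bit value, one size-oriented option is to swap the four bytes in place before interpreting them as a UINT32. A sketch (the name `swap32_inplace` is hypothetical, and the size benefit on a particular 8051 toolchain is an assumption worth checking in the listing):

```c
#include <stdint.h>

/* Reverse the four bytes of a 32-bit value in place. Only single-byte
   loads and stores are involved, which small 8051 compilers usually
   translate into compact code. */
static void swap32_inplace(uint8_t *p)
{
    uint8_t t;
    t = p[0]; p[0] = p[3]; p[3] = t;
    t = p[1]; p[1] = p[2]; p[2] = t;
}
```

Applied to {0x11, 0x22, 0x33, 0x44} this leaves {0x44, 0x33, 0x22, 0x11} in the buffer.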
As Per said, the format of the data sent to and received from the flash drive should be defined in the interface specification for the drive.
It is then up to you to ensure that your Windows driver writes according to that specification, and your 8051 "driver" reads according to that specification.
Clearly, one of them is broken!
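One way to honor such a specification is to serialize explicitly on both sides instead of relying on the host's byte order. A sketch, assuming the spec defines 32-bit fields as little-endian on the wire (`store_le32` is a hypothetical name):

```c
#include <stdint.h>

/* Write a 32-bit value into a buffer least-significant byte first.
   This produces the same wire bytes whether the host is a little-endian
   PC or a big-endian 8051, so both sides agree by construction. */
static void store_le32(uint8_t *buf, uint32_t v)
{
    buf[0] = (uint8_t)(v & 0xFF);
    buf[1] = (uint8_t)((v >> 8) & 0xFF);
    buf[2] = (uint8_t)((v >> 16) & 0xFF);
    buf[3] = (uint8_t)((v >> 24) & 0xFF);
}
```

The matching reader assembles the bytes in the same order, so 0x11223344 travels as {0x44, 0x33, 0x22, 0x11} and is recovered identically on either end.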