When I try to print data such as 0x16, the output shows 0x1600. I then tried to print 0x16 in decimal and the output is 5632, which is the decimal equivalent of 0x1600. I am using a trial version of Keil µVision. Does anyone know why I am having this problem and how I can fix it?
Shannon, perhaps I could help if you posted the C code that produced the results you describe (assuming you're writing your app in C, that is). Dave.
One common problem: what's the printf() format string? Because the 8051 is an 8-bit processor, Keil C51 does not, by default, perform "integer promotion" on byte-sized variables passed as parameters. Accordingly, Keil's printf() has an extra length modifier, 'b', to indicate 8-bit values, e.g. "%bx" or "%bu" instead of "%x" and "%u". The latter two imply 16-bit values, the former 8-bit values. printf() is a variable-argument routine, so the only way it can know the size of each argument is from the format specifier. If you pass 8 bits to printf() but use a 16-bit format specifier, the routine will read 2 bytes from the argument area, and you'll get unexpected answers.
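For reference, here is a minimal sketch of what Drew describes (untested; it assumes a Keil C51 target with putchar()/the serial port already set up so printf() output goes somewhere, and the variable name is just for illustration):

#include <stdio.h>

void main(void)
{
    unsigned char val = 0x16;          /* byte-sized variable, as in Shannon's case */

    /* Wrong on C51: %x expects a 16-bit argument, but only one byte is passed,
       so printf() also consumes an adjacent garbage byte -> output like 0x1600 / 5632 */
    printf("%x\n", val);

    /* Right for Keil C51: the 'b' modifier tells printf() the argument is 8 bits */
    printf("%bx\n", val);              /* prints 16 */
    printf("%bu\n", val);              /* prints 22 (decimal) */

    /* Portable alternative: cast the byte up so its size matches a plain %x */
    printf("%x\n", (unsigned int)val); /* also prints 16 */
}

The cast in the last line is the more portable fix if the code ever has to build with a compiler that performs the usual integer promotions; the "%b..." forms are specific to Keil C51.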
Thanks Drew! Your suggestion really helped us a lot. My team partner and I had spent way too much time trying to figure out that problem. Also, thanks Dave for replying to my post!