Could someone please explain the following: I tried using the sizeof operator to find out the size of an int variable with printf("%d\n", sizeof(int)); and I was expecting an output of 2, but interestingly the value I got was 512. Can anyone explain the significance of this value?

I tried testing the size of other variables and discovered something interesting. 512 in decimal is 0x0200 in hexadecimal; ignoring the last two hex digits, the first two give the value 2, which is what I was expecting. Similarly, for the size of a char the value I got was 256, which is 0x0100 in hexadecimal, and again the first two hex digits give the data size. Could someone please enlighten me as to why this is so?
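In case it helps, this is roughly the complete test I compiled (a minimal sketch; the usual C51 serial-port initialisation that printf needs is left out, and the values in the comments are just what I observed):

    #include <stdio.h>

    void main(void)
    {
        /* C51 serial-port set-up for printf omitted for brevity */
        printf("%d\n", sizeof(int));    /* expected 2, but this prints 512 (0x0200) */
        printf("%d\n", sizeof(char));   /* expected 1, but this prints 256 (0x0100) */
    }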
"I did try and find sizeof in the manual without success." Trouble is, the behaviour of sizeof is standard and, therefore, not covered in the manual; it's the behaviour of printf that's implementation-dependent - and that's what's explained in the manual. All credit to you for looking in the manual anyway, though!
"Trouble is, the behaviour of sizeof is standard and, therefore, not covered in the manual" I think the problem is that neither Keil's implementation of variadic functions nor Keil's implementation of the sizeof operator conform to the standard: 1) char is not promoted to int for a variadic function. 2) sizeof does not always evaluate to the same type. The standard requires sizeof to evaluate to a size_t, whereas it actually evaluates to either a char or an int depending on the situation. "All credit to you for looking in the manual anyway, though!" In my defence it was at least a slightly out of date copy! Stefan
"1) char is not promoted to int for a variadic function." Doesn't this depend on whether you've enabed the "ANSI Integer Promotion?"
"Doesn't this depend on whether you've enabed the "ANSI Integer Promotion?"" I'm not sure - the integer promotion business relates to the 'unary conversions' applied to the operands of certain operators. I think that function argument promotions are a separate issue. Having said that, I'm going to test it... Stefan
Ok, I've tested it and it doesn't make any difference to printf() arguments. In either case you have to use an explicit cast to int or use the 'b' flag when printf()ing a char. Something else that's worth noting about Keil's printf() is that the %c specifier expects a char rather than an int, unlike stardard 'C'. Stefan
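P.S. For example (a sketch; the 'b' flag here is the Keil-specific length modifier described in the C51 printf documentation, not standard C):

    #include <stdio.h>

    void main(void)
    {
        char c = 'A';

        printf("%d\n", (int)c);    /* explicit cast so that a full int is passed       */
        printf("%bd\n", c);        /* Keil's 'b' flag makes %d expect a char argument   */
        printf("%c\n", c);         /* C51's %c also expects a char rather than an int   */
    }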
"Something else that's worth noting about Keil's printf() is that the %c specifier expects a char rather than an int, unlike stardard 'C'." Standard C (as opposed to stardard C) also expects an int. :-) The Keil printf function expects a char, but this is documented in the manual: http://www.keil.com/support/man/docs/c51/c51_printf.htm Jon
"Something else that's worth noting about Keil's printf() is that the %c specifier expects a char rather than an int unlike stardard [sic] 'C'." Actually, the ISO 9899:1990 stardard [sic] defines an int to be an object that, "has the natural size suggested by the architecture of the execution environment". So a C51 int should really be 8 bits, since the 8051 is an 8-bit processor... he said contentiously from a safe distance... ;-)
"So a C51 int should really be 8 bits, since the 8051 is an 8-bit processor... Except for the requirement that an int must be able to support at least the absolute magnitude for INT_MIN and INT_MAX as defined in limits.h. Therefore, an ISO-conforming implementation cannot represent an int type in only eight bits.
"So a C51 int should really be 8 bits, since the 8051 is an 8-bit processor..." "Except for the requirement that an int must be able to represent at least the minimum magnitudes laid down for INT_MIN and INT_MAX in limits.h... Therefore, an ISO-conforming implementation cannot represent an int type in only eight bits." Geez. I'm glad we don't make a compiler for a 4-bit architecture. Jon
"ISO 9899:1990 defines an int to be an object that, 'has the natural size suggested by the architecture of the execution environment'" But also, "an ISO-conforming implementation cannot represent an int type in only eight bits." Thus we have an inherent conflict on an 8-bit platform! So the question is: do you prefer your compiler to be strictly ISO-conforming, or do you want it to be well-suited to its 8051 target? Of course, there is no answer to that! Different people and different projects will have different priorities. We just have to hope that Keil have made a decent set of compromises, and supplied a decent set of options - and put up with the peculiarities such as this where the conflict cannot be easily resolved! This requires a lot of Please read the manual! As Jon said, don't even mention 4-bitters...! ;-)
"Thus we have an inherent conflict on an 8-bit platform!" Not really. The "natural size" is not actually a requirement or a definition, just a suggestion. It couldn't be a requirement anyway: "natural size" is far too sloppy a phrase for that. The lower bounds on INT_MAX and -INT_MIN, on the other hand, are strict requirements, so in this case the strict requirement simply overrules the suggestion. Keil is fully correct here in using 16-bit ints even though the 8051 is an 8-bit platform. Keeping in mind the minimum requirements on INT_MAX and INT_MIN, 16-bit ints are the "natural size suggested by the execution environment".
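To put figures on that, the limits can be read straight out of limits.h (a minimal sketch; the exact values printed are implementation-specific, but C90 obliges INT_MAX to be at least 32767 and INT_MIN to be at most -32767):

    #include <limits.h>
    #include <stdio.h>

    void main(void)
    {
        /* C90 requires INT_MAX >= 32767 and INT_MIN <= -32767, which already
           needs 16 bits; an 8-bit signed type only reaches about +/-127. */
        printf("INT_MIN  = %ld\n", (long)INT_MIN);
        printf("INT_MAX  = %ld\n", (long)INT_MAX);
        printf("CHAR_BIT = %d\n",  (int)CHAR_BIT);
    }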