I have tried variations using sizeof() with unreliable results. The method below works, but is it bogus? Can it be improved? I am not worried about portability.
// -- Unit Variables --
struct {
    // Complex Structure
    // Lots of nested arrays,
    // integer values, etc.
} message;

char replyBuffer[20];          // Input Buffer

// Return size of message structure
unsigned int getMessageSize(void)
{
    int i, *p1, *p2;

    p1 = (int *)&message;      // Create pointer to Message Struct
    p2 = (int *)&replyBuffer;  // Create pointer to replyBuffer
    i = p2 - p1;               // Calculate message structure size
    return(i);                 // Does this really work?
}
sizeof() is a compile-time constant. If sizeof() had a bug, the compiler wouldn't know how big your structures are, which means it's highly likely that the linker wouldn't allocate space for them properly. In short, your program most likely wouldn't work correctly. Why does the following not work for you?
unsigned int getMessageSize(void)
{
    return sizeof(message);
}
typedef struct {
    // complicated declarations
} Message;

typedef struct {
    Message m;
    U8      justBeyondM;
} SizeofMessage;

Since a U8 needs no alignment padding in front of it, the offset of justBeyondM is exactly the size of the Message member, and you can get that offset as a constant expression:

&(((SizeofMessage *)0)->justBeyondM)
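For what it's worth, here is a minimal, self-contained sketch of that trick. The Message members are placeholders of my own, and offsetof() from <stddef.h> is the standard-library spelling of the same null-pointer cast:

#include <stddef.h>   /* offsetof, size_t */
#include <stdio.h>

typedef unsigned char U8;           /* assumed 8-bit unsigned typedef */

typedef struct {
    /* placeholder members standing in for the "complicated declarations" */
    int  nested[8];
    long values[4];
} Message;

typedef struct {
    Message m;
    U8      justBeyondM;            /* first byte past the Message member */
} SizeofMessage;

/* A U8 has alignment 1, so no padding is inserted before justBeyondM and
   its offset equals sizeof(Message), trailing padding included. */
#define MESSAGE_SIZE offsetof(SizeofMessage, justBeyondM)

int main(void)
{
    printf("sizeof(Message) = %zu, MESSAGE_SIZE = %zu\n",
           sizeof(Message), (size_t)MESSAGE_SIZE);
    return 0;
}

Both values come out the same; the macro just computes the size without naming the Message type in a sizeof expression.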
I tested this and found that sizeof returned a value that was exactly twice the calculated value.
i = sizeof message;   // i = 0x63C6
i = p2 - p1;          // i = 0x31E3 (using integer pointers)
"sizeof returned a value that was exactly twice the calculated value." ... So in a nutshell, you asked sizeof() a different question than the one you wanted the answer to, sizeof() gave the correct answer, and you ended up blaming the messenger. Good that we were able to clear that up.
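A short sketch of what happened, using a placeholder struct of my own (on the original poster's 16-bit target sizeof(int) is 2, which is exactly the factor of two observed; on a 32-bit host the ratio would be 4 instead):

#include <stddef.h>   /* ptrdiff_t, size_t */
#include <stdio.h>

static struct {
    int data[100];    /* stand-in for the real "complex structure" */
} message;

int main(void)
{
    int *p1 = (int *)&message;
    int *p2 = (int *)((char *)&message + sizeof message);   /* one past the end */

    /* Pointer subtraction counts elements (ints here), not bytes. */
    printf("sizeof message     = %zu\n", sizeof message);
    printf("int-pointer diff   = %td\n", (ptrdiff_t)(p2 - p1));
    printf("diff * sizeof(int) = %zu\n", (size_t)(p2 - p1) * sizeof(int));
    return 0;
}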
"I was thinking of bytes as 16 bits." False premises do tend to lead to incorrect conclusions... ;-)
Technically, sizeof returns the size of a type in multiples of sizeof(char). Note that while sizeof(char) == 1 (by definition), it's not necessarily true that a char is one 8-bit octet. That is, however, almost always true on typical platforms these days, so people tend to forget that a char might be wider than eight bits, thus providing glorious opportunities for pedantry.

The difference of two int pointers gives you the distance in multiples of sizeof(int). Recall that pointer arithmetic does not operate in units of bytes, but in units of sizeof(type pointed to). If you want byte arithmetic (sizeof(char) arithmetic), you need to cast the pointers to char *, or convert them to an integer type, before subtracting.

Thirty or forty years ago, the size of a byte used to vary. (That's one reason the IEEE likes to use the word "octet".) But I haven't heard anyone debate the point in a long time; bytes are always 8 bits among the people I talk to. People I know use the word "word" to describe longer sequences of bits. Some people like to insist that a "word" must be exactly 16 bits (and thus use terms like "dword" for 32 bits); others think of it as the native width of the data bus, which makes the meaning context-dependent.

I like to make the widths of integer types explicit in the names. So, I use U8, U16, U32 rather than char/uchar, word, long/dword.
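Here is a small sketch illustrating the last two points. The struct and the typedefs are mine for illustration; the exact-width types come from <stdint.h>, which is one standard way to get U8/U16/U32-style names. It measures the same distance with int pointers, with char pointers, and via integer conversion:

#include <stdint.h>   /* uint8_t/uint16_t/uint32_t, uintptr_t, uintmax_t */
#include <stddef.h>   /* ptrdiff_t */
#include <stdio.h>

typedef uint8_t  U8;   /* explicit-width names, as suggested above */
typedef uint16_t U16;
typedef uint32_t U32;

static struct {
    U16 id;            /* placeholder fields for illustration */
    U32 payload[8];
} message;

int main(void)
{
    unsigned char *c1 = (unsigned char *)&message;
    unsigned char *c2 = (unsigned char *)&message + sizeof message;
    int           *i1 = (int *)&message;
    int           *i2 = (int *)c2;

    printf("difference in ints : %td\n", (ptrdiff_t)(i2 - i1));   /* units of sizeof(int) */
    printf("difference in bytes: %td\n", (ptrdiff_t)(c2 - c1));   /* units of sizeof(char) */

    /* Converting the pointers to integers also gives a byte distance. */
    printf("via uintptr_t      : %ju\n",
           (uintmax_t)((uintptr_t)c2 - (uintptr_t)c1));
    return 0;
}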
"I like to make the widths of integer types explicit in the names. So, I use U8, U16, U32, rather than char/uchar, word, long / dword." Absolutely!