I have tried variations using sizeof() with unreliable results. The method below works, but is it bogus? Can it be improved? I am not worried about portability.
// -- Unit Variables --
struct {
    // Complex Structure
    // Lots of nested arrays,
    // integer values, etc.
} message;

char replyBuffer[20];               // Input Buffer

// Return size of message structure
unsigned int getMessageSize(void)
{
    int i, *p1, *p2;

    p1 = (int *)&message;           // Create pointer to Message Struct
    p2 = (int *)&replyBuffer;       // Create pointer to replyBuffer
    i = p2 - p1;                    // Calculate message structure size
    return(i);                      // Does this really work?
}
You are assuming that 'message' is always located immediately before 'replyBuffer' in memory; that is very dangerous. It will definitely not work if message and replyBuffer end up in different memory areas (near vs. far/huge, etc.):
typedef struct message {
    // Complex Structure
    // Lots of nested arrays,
    // integer values, etc.
} MESSAGE;

MESSAGE msg;
char replyBuffer[20];               // Input Buffer

// Return size of message structure
unsigned int getMessageSize(void)
{
    return (sizeof(msg));
}
#pragma pack(1)                     // or: bytealign
typedef struct message {
    // Complex Structure
    // Lots of nested arrays,
    // integer values, etc.
} MESSAGE;
#pragma pack()                      // restore default packing (pack(2))
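If it helps to see what packing actually changes, here is a minimal sketch. The member names are placeholders of my own, and I am assuming a compiler that honours #pragma pack the way shown above:

#include <stdio.h>

#pragma pack(1)
typedef struct {
    char tag;                       // hypothetical members, just for illustration
    long value;
} PACKED_MESSAGE;
#pragma pack()                      // back to the compiler's default packing

typedef struct {
    char tag;
    long value;
} PADDED_MESSAGE;

int main(void)
{
    // The packed struct has no padding after 'tag'; the default-packed one usually does.
    printf("packed: %lu  padded: %lu\n",
           (unsigned long)sizeof(PACKED_MESSAGE),
           (unsigned long)sizeof(PADDED_MESSAGE));
    return 0;
}

Packing makes sizeof match the on-the-wire layout of the message, at the cost of possibly slower (misaligned) member access on some targets.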
"I have tried variations using sizeof() with unreliable results." You'll want to go into more detail on that one. What made you qualify the results of sizeof() as "unreliable"? The method you showed is quite certainly a lot less reliable than just using sizeof().
Thank you all for the quick response. The sizeof results were larger than the actual size. When I used the sizeof results to load new data into the message structure, it would wipe out the replyBuffer. I am not really keen on using pack(1) because I do not want to slow down the functions that access the message structure. How is this 'very dangerous' and 'a lot less reliable' when it seems to be working? After the build the pointers are not going to change, are they? I am not arguing, just curious. Since the consensus seems to be that my method is bogus, I will experiment some more with sizeof.
Sizeof() is a compile-time constant. If sizeof() had a bug, the compiler wouldn't know how big your structures are, which means it's highly likely the linker wouldn't allocate space for them properly. In short, your program most likely wouldn't work correctly at all. Why doesn't something as simple as this work for you?
unsigned int getMessageSize(void)
{
    return sizeof(message);
}
typedef struct {
    // complicated declarations
} Message;

typedef struct {
    Message m;
    U8      justBeyondM;
} SizeofMessage;
&(((SizeofMessage*)0)->justBeyondM)
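That expression is the address justBeyondM would have in a SizeofMessage object placed at address 0, i.e. the offset of justBeyondM, which is the size of Message including any trailing padding. Here is a minimal sketch of the same idea using the standard offsetof macro; the placeholder members are my own, not the original poster's:

#include <stdio.h>
#include <stddef.h>

typedef unsigned char U8;

typedef struct {
    int  values[4];                 // placeholder contents, just for illustration
    char name[7];
} Message;

typedef struct {
    Message m;
    U8      justBeyondM;            // 1-byte member placed right after m
} SizeofMessage;

int main(void)
{
    // The offset of justBeyondM equals sizeof(Message), trailing padding included.
    printf("offsetof: %lu  sizeof: %lu\n",
           (unsigned long)offsetof(SizeofMessage, justBeyondM),
           (unsigned long)sizeof(Message));
    return 0;
}

The null-pointer cast shown above is essentially what offsetof expands to on many implementations; the macro is just the portable spelling of it.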
"The sizeof results were larger than the actual size. When I used the sizeof results to load new data into the message structure, it would wipe out the replyBuffer." That means either you "used the sizeof results" in some incorrect way, or you found an enormous compiler bug. You'll have to forgive people if they think the former is a good deal more likely. Whichever it is, the next step is the same: you really should show an example case that actually exhibits the problem.
I tested this and found that sizeof returned a value that was exactly twice the value calculated by the pointer subtraction:
i = sizeof message;   // i = 0x63C6
i = p2 - p1;          // i = 0x31E3 (using integer pointers)
"sizeof returned a value that was exactly twice the value of the calculated sum." ... The pointer subtraction counts in units of sizeof(int), which is evidently 2 bytes on your target, while sizeof counts bytes; hence the factor of two. So in a nutshell, you asked sizeof() a different question than the one you wanted the answer to, sizeof() gave the correct answer, and you ended up blaming the messenger. Good that we were able to clear that up.
"I was thinking of bytes as 16 bits." False premises do tend to lead to incorrect conclusions... ;-)
Technically, sizeof returns the size of a type in multiples of sizeof(char). Note that while sizeof(char) == 1 (by definition), it's not necessarily true that a char is one 8-bit byte. That is, however, almost always true on typical platforms these days, so people tend to forget that a char might not be 8 bits, thus providing glorious opportunities for pedantry.

The difference of int pointers gives you the difference in multiples of sizeof(int). Recall that pointer arithmetic does not operate in units of bytes, but in units of sizeof(type pointed to). If you want byte arithmetic (sizeof(char) arithmetic), you need to cast the pointers to char * (or to an integer type) before subtracting.

Thirty or forty years ago, the size of a byte used to vary. (That's one reason the IEEE likes to use the word "octet".) But I haven't heard anyone debate the point in a long time; bytes are always 8 bits among the people I talk to. People I know use the word "word" to describe longer sequences of bits. Some people like to insist that a "word" must be exactly 16 bits (and thus use terms like "dword" for 32 bits); others think of it as the native width of the data bus, which makes the meaning context-dependent.

I like to make the widths of integer types explicit in the names. So, I use U8, U16, U32, rather than char/uchar, word, long/dword.
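For concreteness, here is a minimal sketch of that difference in units; the struct contents are placeholders I made up, not the actual message layout:

#include <stdio.h>

struct Message {
    int data[100];                  // placeholder contents, just for illustration
};

int main(void)
{
    struct Message message;
    int  *ip1 = (int *)&message,  *ip2 = (int *)(&message + 1);
    char *cp1 = (char *)&message, *cp2 = (char *)(&message + 1);

    printf("int-pointer difference : %ld (units of sizeof(int))\n", (long)(ip2 - ip1));
    printf("char-pointer difference: %ld (bytes)\n", (long)(cp2 - cp1));
    printf("sizeof message         : %lu (bytes)\n", (unsigned long)sizeof message);
    return 0;
}

On a target with 2-byte ints, the first number is exactly half of the other two, which is the factor of two seen earlier in the thread.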
"I like to make the widths of integer types explicit in the names. So, I use U8, U16, U32, rather than char/uchar, word, long / dword." Absolutely!