openPOWERLINK_v1.1.0\EplStack\EplApiGeneric.c(1648): warning: #68-D: integer conversion resulted in a change of sign
The parameter is usually assigned a DWORD value, but when it is set to -1 at initialization, the program should instead assign the parameter a default value stored in an object dictionary.
if (EplApiInstance_g.m_InitParam.m_dwCycleLen != -1)
m_dwCycleLen is a DWORD. How can I remove the warning?
m_dwCycleLen is a DWORD.
And a DWORD is which C/C++ datatype?
Without knowing this bit of information, I would guess that it's an unsigned data type ... and assigning a negative value to a variable of an unsigned type results in a change of sign (since the stored value ends up positive).
#define DWORD unsigned long int
Well, that pretty much confirms my suspicions.
So, to get rid of the warning, don't try to assign a signed value to a variable of an unsigned type.
How would I be able to remove the warning though? Or should I change the data type?
if (EplApiInstance_g.m_InitParam.m_dwCycleLen != (DWORD)-1)
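A minimal compilable sketch of that fix, assuming DWORD is `unsigned long` as quoted above (the surrounding `EplApiInstance_g` structure is not reproduced; `pick_cycle_len` and its parameter names are illustrative):

```c
typedef unsigned long DWORD;

static DWORD pick_cycle_len(DWORD dwConfigured, DWORD dwOdDefault)
{
    /* Casting with (DWORD)-1 keeps both operands unsigned, so the
     * #68-D "change of sign" warning disappears. */
    if (dwConfigured != (DWORD)-1)
        return dwConfigured;   /* explicitly configured value */
    return dwOdDefault;        /* fall back to the object-dictionary default */
}
```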
Ok great, thank you for your help!!
It's sometimes considered bad style to have variables that "usually" contain a value but "occasionally" hold "special" sentinel values that are not meant to be used as actual values.
One way to avoid this would be to have a second variable that explicitly indicates how to initialize the first.
If you want to keep the "special meaning" construction, use a positive value as this "special" meaning, _and_ use a define for this value to get rid of the "magic number". (e.g. #define INIT_VAR_FROM_ROM 0xFFFFFFFF)
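Both alternatives might be sketched like this (the struct and function names are made up for illustration; only DWORD and the general idea come from the thread):

```c
typedef unsigned long DWORD;

/* Alternative 1: an explicit flag, so no value of dwCycleLen is "special". */
struct init_param {
    DWORD dwCycleLen;
    int   fUseOdDefault;   /* non-zero: take the object-dictionary default */
};

/* Alternative 2: a named sentinel instead of a bare -1 "magic number". */
#define INIT_VAR_FROM_OD 0xFFFFFFFFUL

static DWORD resolve_cycle_len(const struct init_param *p, DWORD dwOdDefault)
{
    if (p->fUseOdDefault || p->dwCycleLen == INIT_VAR_FROM_OD)
        return dwOdDefault;
    return p->dwCycleLen;
}
```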
Why use the preprocessor and do
#define DWORD unsigned long int
instead of using the compiler:
typedef unsigned long DWORD;
Never use the preprocessor to do tasks the language has direct support for.
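A classic illustration of why the typedef is safer (not from this thread; the names here are made up, and the pitfall shows up with pointer types rather than with DWORD itself):

```c
#define PBYTE_BAD  unsigned char *
typedef unsigned char *PBYTE_GOOD;

PBYTE_BAD  a1, b1;   /* expands to: unsigned char *a1, b1;  b1 is NOT a pointer! */
PBYTE_GOOD a2, b2;   /* a2 and b2 are both pointers, as intended */
```

The #define is pure text substitution, so the `*` binds only to the first declarator; the typedef creates a real type name that applies to every declarator.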
"DWORD" is a very poor choice of name - precisely because it gives no explicit indication of either the size or the signed-ness of the type!
I guess you didn't choose it yourself but, whenever you do get to make that choice, be sure to choose something that makes both the size and the signed-ness explicitly clear.
Since C99 has defined a set of such names, you might as well use them...
eg, see:
http://www.keil.com/forum/docs/thread2472.asp
http://www.keil.com/forum/docs/thread5112.asp
www.opengroup.org/.../stdint.h.html
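A short sketch of those C99 names in use (pinning the legacy DWORD name to a fixed-width type is my suggestion here, not something the links mandate):

```c
#include <stdint.h>

/* C99 <stdint.h> names carry both size and signedness in the name itself.
 * If the legacy DWORD name has to stay (e.g. in ported code), it can at
 * least be pinned to a fixed-width type: */
typedef uint32_t DWORD;

uint8_t  u8;    /* exactly  8 bits, unsigned */
int16_t  s16;   /* exactly 16 bits, signed   */
uint32_t u32;   /* exactly 32 bits, unsigned */
```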
You are good and right.
You should always use a name that says what the size, the signedness, and the endianness are.
typedef unsigned long ULONG32BITLOWENDIAN;
That depends.
It does give an indication of being an unsigned data type because of convention - quite a lot of systems and languages have used BYTE and WORD to indicate unsigned data.
When porting Windows code, you have to use WORD, DWORD, QWORD etc just because a huge amount of code is written with these variable types.
It could be argued that M$ originally made a poor choice, locking the names to the 16-bit 8086 processor. But then again, I don't think they - or anyone else - would have expected their API to survive 25 years from Windows 1.0.
When designing new code, it is often a good idea to use (u)int8_t, (u)int16_t, (u)int32_t, ... for variables that should have a fixed size. If they are not supported by the compiler, then it is quite easy to create a little helper header with them declared.
And to support efficiency (on larger processors) it may be a good idea to make use of (u)int_least8_t, (u)int_least16_t, ... and (u)int_fast8_t, (u)int_fast16_t, ... data types when the memory consumption of extra large variables doesn't matter.
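A small sketch of how the two families divide the work (the function is illustrative):

```c
#include <stdint.h>

/* uint_least8_t: smallest type with at least 8 bits (compact storage);
 * uint_fast8_t:  whatever the CPU handles quickest with at least 8 bits
 * (may be widened to the native word size on a 32-bit core). */
static uint_fast16_t sum_bytes(const uint_least8_t *p, uint_fast8_t n)
{
    uint_fast16_t sum = 0;
    for (uint_fast8_t i = 0; i < n; i++)  /* fast type for the loop counter */
        sum += p[i];
    return sum;
}
```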
"ULONG32BITLOWENDIAN".
Note that the processor should normally not have any data type specifying big or little endian. Most processors can't even read variables of the "wrong" byte order and require that you write code to read the value byte-by-byte and glue the data together again.
You write your program to always use the native format for the internal variables - in which case you don't need to know what that format is.
Only when storing binary data to disk, transmitting etc do you need to care about the byte order. Then you normally write a function to convert to/from little-endian or big-endian, depending on what format the file format or link protocol is defined to use.
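The byte-by-byte conversion described above might look like this for a 32-bit little-endian wire/file format (`put_le32`/`get_le32` are illustrative names). Because it never reinterprets memory, it behaves identically on any host byte order:

```c
#include <stdint.h>

/* Serialize a 32-bit value in little-endian order, one byte at a time. */
static void put_le32(uint8_t out[4], uint32_t v)
{
    out[0] = (uint8_t)(v);
    out[1] = (uint8_t)(v >> 8);
    out[2] = (uint8_t)(v >> 16);
    out[3] = (uint8_t)(v >> 24);
}

/* Glue the bytes back together again. */
static uint32_t get_le32(const uint8_t in[4])
{
    return (uint32_t)in[0]
         | ((uint32_t)in[1] << 8)
         | ((uint32_t)in[2] << 16)
         | ((uint32_t)in[3] << 24);
}
```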
Some file formats or protocols are defined to work with either byte order, by containing a magic marker to let the other side auto-detect the used byte order.
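A sketch of such auto-detection, in the spirit of the Unicode BOM (0xFEFF): the reader checks which way round the marker arrives and decodes the payload accordingly (function and enum names are made up):

```c
#include <stdint.h>

enum byte_order { ORDER_BIG, ORDER_LITTLE, ORDER_UNKNOWN };

/* Inspect the 2-byte magic marker at the start of the stream. */
static enum byte_order detect_order(const uint8_t hdr[2])
{
    if (hdr[0] == 0xFE && hdr[1] == 0xFF) return ORDER_BIG;    /* marker as-is   */
    if (hdr[0] == 0xFF && hdr[1] == 0xFE) return ORDER_LITTLE; /* marker swapped */
    return ORDER_UNKNOWN;
}

/* Decode a 16-bit payload value using the detected order. */
static uint16_t read_u16(const uint8_t b[2], enum byte_order o)
{
    return (o == ORDER_BIG)
        ? (uint16_t)((b[0] << 8) | b[1])
        : (uint16_t)((b[1] << 8) | b[0]);
}
```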
Before writing your own, check for standard functions like htons and ntohl; eg:
www.opengroup.org/.../ntohl.html
Note that they have a hard-coded "network" byte order.
If the processor already has the correct byte order, then these functions will be no-ops.
Because of this, it is often better to have your own conditionally compiled or auto-detecting functions specifically to convert to/from big-endian and little-endian.
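One common run-time auto-detection trick looks like this (a sketch; the function names are illustrative, and compile-time alternatives such as predefined compiler macros are toolchain-specific):

```c
#include <stdint.h>

/* Probe the host byte order at run time. */
static int host_is_little_endian(void)
{
    const uint16_t probe = 1;
    return *(const uint8_t *)&probe == 1;  /* low byte first => little-endian */
}

/* Convert host order to little-endian (its own inverse, like ntohl/htonl). */
static uint32_t to_le32(uint32_t v)
{
    if (host_is_little_endian())
        return v;                          /* correct order already: no-op */
    return ((v & 0x000000FFu) << 24) |     /* otherwise swap all four bytes */
           ((v & 0x0000FF00u) << 8)  |
           ((v & 0x00FF0000u) >> 8)  |
           ((v & 0xFF000000u) >> 24);
}
```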