
warning correction help

openPOWERLINK_v1.1.0\EplStack\EplApiGeneric.c(1648): warning:  #68-D: integer conversion resulted in a change of sign

The parameter is normally assigned a DWORD value, but when it is initialised to -1 at start-up, the program should instead assign the parameter a default value stored in the object dictionary.

if (EplApiInstance_g.m_InitParam.m_dwCycleLen != -1)

m_dwCycleLen is a DWORD. How can I remove the warning?
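One common way to remove this particular warning, assuming DWORD is a 32-bit unsigned type as is usual in the openPOWERLINK code base, is to make the sentinel unsigned as well - either by casting the -1 or by writing it as 0xFFFFFFFFUL. A minimal sketch; the names below are only placeholders for the real ones:

/* Sketch: keeping both sides of the comparison unsigned avoids the
 * sign-change diagnostic. DWORD is assumed to be 32-bit unsigned. */
typedef unsigned int DWORD;

static DWORD dwCycleLen = (DWORD) -1;   /* -1 sentinel stored as 0xFFFFFFFF */

void check_cycle_len(void)
{
    if (dwCycleLen != (DWORD) -1)        /* or: != 0xFFFFFFFFUL */
    {
        /* a cycle length was configured explicitly */
    }
    else
    {
        /* fall back to the default stored in the object dictionary */
    }
}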

Parents
  • That depends.

    The name does give an indication of being an unsigned data type, by convention - quite a lot of systems and languages have used BYTE and WORD to denote unsigned data.

    When porting Windows code, you have to use WORD, DWORD, QWORD etc just because a huge amount of code is written with these variable types.

    It could be argued that M$ originally made a poor choice, locking the names to the 16-bit 8086 processor. But then again, I don't think they - or anyone else - expected their API to survive 25 years from Windows 1.0.

    When designing new code, it is often a good idea to use (u)int8_t, (u)int16_t, (u)int32_t, ... for variables that should have a fixed size. If the compiler does not support them, it is quite easy to create a little helper header declaring them - a sketch of such a header follows at the end of this post.

    And for efficiency (on larger processors) it can be a good idea to use the (u)int_least8_t, (u)int_least16_t, ... and (u)int_fast8_t, (u)int_fast16_t, ... data types whenever the extra memory consumed by wider variables doesn't matter.
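    A minimal sketch of such a helper header, assuming a typical 32-bit target; the underlying base types are an assumption and need checking against the toolchain's documentation:

    /* my_stdint.h - fallback for toolchains without <stdint.h>.
     * The base types below are assumptions for a typical 32-bit target. */
    #ifndef MY_STDINT_H
    #define MY_STDINT_H

    typedef signed   char      int8_t;
    typedef unsigned char      uint8_t;
    typedef signed   short     int16_t;
    typedef unsigned short     uint16_t;
    typedef signed   int       int32_t;
    typedef unsigned int       uint32_t;

    /* On a 32-bit target the _fast variants can simply map onto the
     * exact-width types; the _least variants can be handled the same way. */
    typedef int32_t  int_fast8_t;
    typedef int32_t  int_fast16_t;
    typedef int32_t  int_fast32_t;
    typedef uint32_t uint_fast8_t;
    typedef uint32_t uint_fast16_t;
    typedef uint32_t uint_fast32_t;

    #endif /* MY_STDINT_H */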

Children
  • "ULONG32BITLOWENDIAN".

    Note that the processor should normally not have any data type specifying big or little endian. Most processors can't even read variables of the "wrong" byte order and require that you write code to read the value byte-by-byte and glue the data together again.

    You write your program to always use the native format for the internal variables - in which case you don't need to know what that format is.

    Only when storing binary data to disk, transmitting it over a link, etc. do you need to care about the byte order. Then you normally write a function to convert to/from little-endian or big-endian, depending on which order the file format or link protocol is defined to use - a sketch of such a conversion follows at the end of this post.

    Some file formats or protocols are defined to work with either byte order, by containing a magic marker to let the other side auto-detect the used byte order.
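    As a sketch of that byte-by-byte approach - here reading and writing a 32-bit value stored little-endian in a byte buffer, which behaves identically on any host because the buffer is never reinterpreted as a 32-bit variable directly (the function names are only illustrative):

    #include <stdint.h>

    /* Glue four little-endian bytes together into a 32-bit value. */
    static uint32_t read_le32(const uint8_t *p)
    {
        return (uint32_t)p[0]
             | ((uint32_t)p[1] << 8)
             | ((uint32_t)p[2] << 16)
             | ((uint32_t)p[3] << 24);
    }

    /* Split a 32-bit value into four bytes in little-endian order. */
    static void write_le32(uint8_t *p, uint32_t v)
    {
        p[0] = (uint8_t)(v);
        p[1] = (uint8_t)(v >> 8);
        p[2] = (uint8_t)(v >> 16);
        p[3] = (uint8_t)(v >> 24);
    }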

  • Before writing your own, check for standard functions like htons and ntohl; e.g.:

    www.opengroup.org/.../ntohl.html
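    A short usage sketch, assuming a POSIX system where the declarations come from <arpa/inet.h> (on Windows they live in <winsock2.h>):

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t host_value = 0x12345678u;

        uint32_t wire_value = htonl(host_value);  /* host -> network (big-endian) */
        uint32_t round_trip = ntohl(wire_value);  /* network -> host */

        printf("host 0x%08x  round trip 0x%08x\n",
               (unsigned)host_value, (unsigned)round_trip);
        return 0;
    }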

  • Note that they have a hard-coded "network" byte order.

    If the processor already has the correct byte order, then these functions will be no-ops.

    Because of this, it is often better to have your own conditionally compiled or auto-detecting functions specifically for converting to/from big-endian and little-endian.
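    A sketch of that style, here with a runtime auto-detection of the host's byte order (many projects use a compile-time macro instead); the function names are only illustrative:

    #include <stdint.h>

    /* Look at the first byte of a known 16-bit value to find the host order. */
    static int host_is_little_endian(void)
    {
        const uint16_t probe = 0x0001u;
        return *(const uint8_t *)&probe == 0x01u;
    }

    /* Convert a 32-bit value between host order and little-endian order.
     * The same function works in both directions, since the byte swap is
     * its own inverse; on a little-endian host it is a no-op. */
    static uint32_t swap_to_from_le32(uint32_t v)
    {
        if (host_is_little_endian())
        {
            return v;
        }
        return  (v >> 24)
             | ((v >> 8)  & 0x0000FF00u)
             | ((v << 8)  & 0x00FF0000u)
             |  (v << 24);
    }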