(Sorry for my limited English and technical ability.)
I have some questions about CRC32. Please kindly guide me.
(www.gelato.unsw.edu.au/.../crc32.c)
To check the CRC, you can either check that the CRC matches the recomputed value, *or* you can check that the remainder computed on the message+CRC is 0. This latter approach is used by a lot of hardware implementations, and is why so many protocols put the end-of-frame flag after the CRC.
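To check my own understanding, here is a minimal sketch of both checks (assuming the common zlib/Ethernet CRC-32 variant: reflected, polynomial 0xEDB88320, initial value and final XOR 0xFFFFFFFF; the name crc32_calc is mine, not from the linked crc32.c). Because of the init/final-XOR convention, the "remainder is 0" check becomes a comparison against the fixed residue 0x2144DF1C:

#include <stdint.h>
#include <stdio.h>

/* Bit-at-a-time CRC-32: reflected, poly 0xEDB88320,
 * init 0xFFFFFFFF, final XOR 0xFFFFFFFF (zlib/Ethernet variant). */
static uint32_t crc32_calc(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t frame[64] = "123456789";   /* message only; CRC appended below */
    size_t msg_len = 9;

    /* Sender: append the CRC least-significant byte first,
     * matching the reflected (LSB-first) bit order. */
    uint32_t crc = crc32_calc(frame, msg_len);
    frame[msg_len + 0] = (uint8_t)crc;
    frame[msg_len + 1] = (uint8_t)(crc >> 8);
    frame[msg_len + 2] = (uint8_t)(crc >> 16);
    frame[msg_len + 3] = (uint8_t)(crc >> 24);

    /* Receiver, way 1: recompute over the message and compare
     * with the received CRC field. */
    uint32_t rx_crc = (uint32_t)frame[msg_len]
                    | (uint32_t)frame[msg_len + 1] << 8
                    | (uint32_t)frame[msg_len + 2] << 16
                    | (uint32_t)frame[msg_len + 3] << 24;
    printf("way 1: %s\n", crc32_calc(frame, msg_len) == rx_crc ? "OK" : "BAD");

    /* Receiver, way 2: run the CRC over message+CRC and compare
     * against the fixed residue (not literally 0 here, only because
     * of the init/final-XOR conventions of this variant). */
    printf("way 2: %s\n",
           crc32_calc(frame, msg_len + 4) == 0x2144DF1Cu ? "OK" : "BAD");
    return 0;
}

As far as I can tell, both ways still read every byte of the frame, which is why I ask question 1 below.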
Note that a CRC is computed over a string of *bits*, so you have to decide on the endianness of the bits within each byte. To get the best error-detecting properties, this should correspond to the order they're actually sent. For example, standard RS-232 serial is little-endian; the most significant bit (sometimes used for parity) is sent last. And when appending a CRC word to a message, you should do it in the right order, matching the endianness.
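To check that I understand the bit-order point, here is a sketch of the same polynomial implemented in both bit orders (the function names are mine, just for illustration; 0xEDB88320 is simply 0x04C11DB7 with its 32 bits reversed):

#include <stdint.h>

/* MSB-first update: bits are taken from the top of each byte,
 * for links that transmit the most significant bit first. */
static uint32_t crc32_update_msb_first(uint32_t crc, uint8_t byte)
{
    crc ^= (uint32_t)byte << 24;
    for (int b = 0; b < 8; b++)
        crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u : crc << 1;
    return crc;
}

/* LSB-first ("reflected") update: bits are taken from the bottom of
 * each byte, matching UART/RS-232 style links that send the least
 * significant bit first. */
static uint32_t crc32_update_lsb_first(uint32_t crc, uint8_t byte)
{
    crc ^= byte;
    for (int b = 0; b < 8; b++)
        crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    return crc;
}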
1. Another article says that checking if ( CRC(buf + CRC(buf)) == 0 ) is the best/fastest way to do CRC verification. But I am wondering whether this is really good for software? I don't see any performance enhancement, because I still need to recalculate the CRC value for the whole packet/string.
2. The most popular polynomial for CRC32 is 0x04C11DB7, which was designed for IEEE 802.3 Ethernet, so I think it is designed for big-endian bit order. However, I want to use it on an RS-485 network, and UART transmission is little-endian (LSB first). If someone wants the best error-detecting properties, do they need to reverse every byte? And if they do, is that the only thing they have to do?
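To make question 2 concrete, this is what I mean by "reverse every byte" (a hypothetical helper, not taken from the linked crc32.c):

#include <stdint.h>

/* Hypothetical helper: flip the bit order within one byte,
 * e.g. 0x01 -> 0x80.  Just to illustrate the question. */
static uint8_t reverse_bits8(uint8_t b)
{
    b = (uint8_t)((b & 0xF0u) >> 4 | (b & 0x0Fu) << 4);
    b = (uint8_t)((b & 0xCCu) >> 2 | (b & 0x33u) << 2);
    b = (uint8_t)((b & 0xAAu) >> 1 | (b & 0x55u) << 1);
    return b;
}

Or is the usual practice instead to use the LSB-first (reflected) form of the algorithm, as in the sketch after the quoted paragraph above, rather than reversing the data bytes themselves?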