(Sorry for my limited English ability and technical ability.)
I have some questions about CRC32. Please kindly guide me.
(www.gelato.unsw.edu.au/.../crc32.c)
To check the CRC, you can either check that the CRC matches the recomputed value, *or* you can check that the remainder computed on the message+CRC is 0. This latter approach is used by a lot of hardware implementations, and is why so many protocols put the end-of-frame flag after the CRC.
Note that a CRC is computed over a string of *bits*, so you have to decide on the endianness of the bits within each byte. To get the best error-detecting properties, this should correspond to the order they're actually sent. For example, standard RS-232 serial is little-endian; the most significant bit (sometimes used for parity) is sent last. And when appending a CRC word to a message, you should do it in the right order, matching the endianness.
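To illustrate what the quoted text describes, here is a minimal sketch of both checks (the frame layout and function name are only examples; the CRC is the common reflected CRC-32 with polynomial 0xEDB88320, 0xFFFFFFFF preset and 0xFFFFFFFF final XOR):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

// Bit-at-a-time reflected CRC-32 (poly 0xEDB88320, preset/final XOR 0xFFFFFFFF).
// Self-check: crc32("123456789", 9) == 0xCBF43926.
static uint32_t crc32(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; ++i) {
        crc ^= buf[i];
        for (int k = 0; k < 8; ++k)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    // Example frame: payload followed by its CRC, appended least significant
    // byte first (the byte order that matches this reflected CRC-32).
    uint8_t frame[16];
    const char *payload = "123456789";
    size_t n = strlen(payload);
    memcpy(frame, payload, n);
    uint32_t tx_crc = crc32(frame, n);
    frame[n + 0] = (uint8_t)(tx_crc);
    frame[n + 1] = (uint8_t)(tx_crc >> 8);
    frame[n + 2] = (uint8_t)(tx_crc >> 16);
    frame[n + 3] = (uint8_t)(tx_crc >> 24);

    // Check 1: recompute the CRC of the payload and compare it with the received CRC.
    uint32_t rx_crc = (uint32_t)frame[n] | ((uint32_t)frame[n + 1] << 8)
                    | ((uint32_t)frame[n + 2] << 16) | ((uint32_t)frame[n + 3] << 24);
    printf("check 1: %s\n", crc32(frame, n) == rx_crc ? "ok" : "bad");

    // Check 2: run the CRC over payload + appended CRC.  Because of the 0xFFFFFFFF
    // preset and final XOR, the result is the fixed constant 0x2144DF1C rather than
    // a literal zero (it would be zero for a "plain" CRC without that conditioning).
    printf("check 2: %s\n", crc32(frame, n + 4) == 0x2144DF1Cu ? "ok" : "bad");
    return 0;
}

Note that both checks make one pass over the received bytes; the remainder form is mainly convenient for hardware that clocks the CRC register as the frame streams through.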
1. Another article says that checking if ( CRC(buf + CRC(buf)) == 0 ) is the best/fastest way to do CRC verification. But I am wondering whether this is good for software? I don't see any performance gain, because I still need to re-calculate the CRC value over the whole packet/string.
2. The most popular polynomial for CRC32 is 0x04C11DB7, which was designed for IEEE 802.3 Ethernet, so I think it is designed for big-endian bit order. However, I want to use it on an RS-485 network, where the UART transmission is little-endian (least significant bit first). If someone wants to get the best error-detecting properties, does he/she need to reverse every byte? And if so, is that the only thing he/she has to do?
If you look at en.wikipedia.org/.../Cyclic_redundancy_check you will see that the polynomials are normally given in multiple ways.
In your case it is 0x04C11DB7 or 0xEDB88320, depending on whether the polynomial is expressed with the most significant bit or the least significant bit first.
The actual polynomial is x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1, and it is the same polynomial whatever bit order you have. It is only the implementation of your CRC routine that needs the polynomial expressed in a certain form, and that will require that you either shift in the data bits in a specific order (low-to-high or high-to-low) or use a precomputed lookup table built for that specific bit order.
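To make the two representations concrete, here is a minimal sketch (not taken from any of the linked sources) of the same bit-at-a-time CRC-32 written for both shift directions; both use the same polynomial, only expressed for the order in which the data bits are shifted in:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

// MSB-first ("big endian" bit order): the polynomial is written as 0x04C11DB7
// and data bits enter at the top of the register.
uint32_t crc32_msb_first(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                       // preset (conditioning)
    for (size_t i = 0; i < len; ++i) {
        crc ^= (uint32_t)buf[i] << 24;
        for (int k = 0; k < 8; ++k)
            crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u : (crc << 1);
    }
    return crc ^ 0xFFFFFFFFu;
}

// LSB-first ("little endian" bit order, the reflected form commonly used in
// software): the same polynomial written bit-reversed as 0xEDB88320, and data
// bits enter at the bottom of the register.
uint32_t crc32_lsb_first(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; ++i) {
        crc ^= buf[i];
        for (int k = 0; k < 8; ++k)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    const char *msg = "123456789";
    // Known check values for this input: 0xFC891918 (MSB-first form) and
    // 0xCBF43926 (LSB-first / reflected form).
    printf("MSB-first: %08lX\n", (unsigned long)crc32_msb_first((const uint8_t *)msg, strlen(msg)));
    printf("LSB-first: %08lX\n", (unsigned long)crc32_lsb_first((const uint8_t *)msg, strlen(msg)));
    return 0;
}

The two routines compute the same CRC over the same bit stream: reverse the bits inside every input byte, feed them to one routine, bit-reverse its 32-bit result, and you get the other routine's answer. A table-driven version likewise needs its table built for the chosen bit order. And since a UART sends the least significant bit of each byte first, the reflected 0xEDB88320 form already processes the bits in the order they go onto an RS-485 line, so no extra per-byte reversal of the data should be needed.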
A reason for caring about the bit order when you compute the CRC is that not all CRC polynomials are chosen with the same criteria. For serial communication it is likely that a disturbance will affect several bits in a row, in which case you want a CRC polynomial optimized for catching error bursts that are as long as possible.
Computing with the wrong bit order will not make the CRC lousy - just a bit sub-optimal for some error cases.
Have a look at the Boost implementation of CRC. It has a lot of options for selection of polynomial, reflection, conditioning, ... www.boost.org/.../crc.html
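For instance, a minimal usage sketch with the library's predefined CRC-32 type (boost::crc_32_type: polynomial 0x04C11DB7, 0xFFFFFFFF preset and final XOR, input and output reflection turned on; the "123456789" buffer is just the usual test string):

#include <boost/crc.hpp>
#include <cstdio>

int main()
{
    const char *data = "123456789";

    boost::crc_32_type crc;          // the standard CRC-32 parameters, predefined
    crc.process_bytes(data, 9);      // feed the buffer (can be called repeatedly)
    std::printf("%08lX\n", (unsigned long)crc.checksum());   // expect CBF43926
    return 0;
}

Other polynomials, reflection settings and conditioning values can be selected through the boost::crc_optimal<> template parameters.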
Hi Per,
Many thanks for your explanation.
I learnt several days ago that 0x04C11DB7 has a 0xEDB88320 variant, and after some study I thought it was used for the optimized calculation of "Reflect Data" and "Reflect Remainder".
I have collected a lot of sample code; one example is from the web site below.
www.lammertbies.nl/.../crc-calculation.html
It uses 0xEDB88320 for the optimized calculation of "Reflect Data" and "Reflect Remainder". (I hope my understanding is not incorrect.)
I hadn't thought further about using 0xEDB88320 as a little-endian solution. My mathematics is weak, and the polynomial is really difficult to understand and handle.
I will study the web page you mentioned. Many Thanks.