6

I am a bit confused about the difference between a Cyclic Redundancy Check and a Hamming code. Both attach a check value computed by some arithmetic operation on the bits of the message being transmitted. For Hamming codes, it is a set of (odd or even) parity bits added to the message; for CRC, it is the remainder of a polynomial division of the contents.

However, CRC and Hamming are referred to as fundamentally different ideas. Can someone please elaborate on why this is?

Also, why is the CRC compared with the FCS (frame check sequence) to see whether the received message arrived with or without error? Why not just use the FCS from the beginning? (My understanding may be completely flawed in asking this, so please correct me.)

3 Answers

2

Both CRC and the Hamming code are binary linear codes. One significant difference is that the Hamming code only works on data of some fixed size (depending on the Hamming code used), whereas CRC is a convolutional code which works for data of any size.
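One way to see the "binary linear code" part concretely is the sketch below (a generic CRC-8 with zero initial value and no final XOR; the routine name and messages are just illustrative, not any particular standard). The CRC of the XOR of two equal-length messages equals the XOR of their CRCs, which is exactly linearity over GF(2); the parity bits of a Hamming code are likewise plain XORs of data bits, and the same CRC loop runs over data of any length.

```python
def crc8(data: bytes, poly=0x07) -> int:
    """Bitwise CRC-8: the remainder of dividing the message polynomial by x^8 + x^2 + x + 1.
    The same loop works for data of any length."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

a = b"hello world!"
b = b"another text"                          # any length, as long as the two messages match here
xor_ab = bytes(x ^ y for x, y in zip(a, b))  # bitwise XOR of the two messages

# Linearity over GF(2): CRC(a XOR b) == CRC(a) XOR CRC(b)
print(crc8(xor_ab) == (crc8(a) ^ crc8(b)))   # True
```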

So, are CRC and the Hamming code fundamentally different ideas? This is a philosophical rather than a technical question, and so has no definite answer. It could also depend on your point of view. From far away, both are binary linear codes. From up close, there are some crucial differences.

Yuval Filmus
2
  • CRC is conceived as the remainder of a polynomial division. It is efficient at detecting errors: if the receiver's recalculated remainder does not match, the message was corrupted. Depending on the CRC size, it can detect bursts of errors (10 bits zeroed in a row, for example), which is great for checking communications.

The "FCS" term is used sometimes for some transformed version of the CRC (Ethernet for example) : The purpose is to apply the CRC algorithm to both the data and its FCS to cancel the remainder value and get a constant (just like even parity is ensuring an even number of "1" bits, including the parity bit).

  • Hamming codes are both detection and correction codes. Adding the Hamming check bits guarantees a minimum distance (measured as the number of differing bits) between valid codewords. For example, with a distance of 3 bits you can correct any 1-bit error OR detect any 2-bit error.
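To make the FCS remark concrete, here is a minimal sketch (a generic CRC-8 with zero initial value, not Ethernet's actual CRC-32 parameters): the sender appends the remainder to the message, and the receiver runs the same division over data plus FCS; an intact frame then leaves the constant remainder 0.

```python
def crc8(data: bytes, poly=0x07) -> int:
    """Bitwise CRC-8: remainder of polynomial division by x^8 + x^2 + x + 1."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

message = b"some payload"
fcs = crc8(message)                # sender: remainder over the data...
frame = message + bytes([fcs])     # ...appended to the frame as the FCS

print(crc8(frame))                 # receiver: remainder over data + FCS is 0 when intact
damaged = bytes([frame[0] ^ 0x10]) + frame[1:]
print(crc8(damaged))               # corruption leaves a nonzero remainder (with high probability)
```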

Reduced to a single check bit, Hamming codes and CRC (with the polynomial x+1) are both identical to a plain parity bit.
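To see that single-check-bit equivalence directly (a small sketch, assuming a plain 1-bit CRC with generator x+1 and zero initial value): the division loop collapses to XOR-ing all the message bits together, which is exactly the even-parity bit.

```python
def parity_bit(bits):
    """Even-parity check bit: XOR of all data bits."""
    p = 0
    for b in bits:
        p ^= b
    return p

def crc1(bits):
    """1-bit CRC with generator polynomial x + 1, processed bit by bit."""
    crc = 0
    for b in bits:
        feedback = (crc & 1) ^ b   # bit shifted out of the 1-bit register, mixed with the input bit
        crc = feedback             # shifting a 1-bit register leaves 0; XOR in the poly's low bit on feedback
    return crc

msg = [1, 0, 1, 1, 0, 1, 1]
print(parity_bit(msg), crc1(msg))  # same check bit for any message
```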

Grabul
0

One big difference is that Hamming codes can only correct 1-bit errors and, with an additional parity bit, detect 2-bit errors. CRCs, on the other hand, can detect an arbitrary number of bit errors with high reliability, even with a relatively short check value.

Hamming codes require a minimum number of parity bits determined by the data length. A CRC can use any number of check bits regardless of the data length.

An 8-bit CRC can't guarantee detection of error bursts longer than 8 bits. However, it will still catch such errors with a statistical probability of 255/256, i.e. there is only about a 0.4% (1/256) chance that an 8-bit CRC fails to detect an arbitrary error pattern spanning more than 8 bits.
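That 1-in-256 figure can be sanity-checked empirically with a rough Monte Carlo sketch (a generic CRC-8 with the check byte appended; the frame size and number of bit flips are arbitrary choices, not anything from the original answer): corrupt a valid frame with far more bit flips than the guarantee covers and count how often the check still passes.

```python
import random

def crc8(data: bytes, poly=0x07) -> int:
    """Bitwise CRC-8; a frame with its check byte appended yields remainder 0."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

random.seed(0)
message = bytes(random.randrange(256) for _ in range(16))
frame = message + bytes([crc8(message)])

trials, misses = 50_000, 0
for _ in range(trials):
    corrupted = bytearray(frame)
    for pos in random.sample(range(len(frame) * 8), 20):   # flip 20 random bits, well past the 8-bit guarantee
        corrupted[pos // 8] ^= 1 << (pos % 8)
    if crc8(bytes(corrupted)) == 0:                         # check still passes: an undetected error
        misses += 1

print(misses / trials)   # empirically close to 1/256 ~ 0.0039
```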

A 32-bit CRC is even better: it is guaranteed to detect any error burst up to 32 bits long, and beyond that there is only about a 1-in-4-billion chance that it will fail to detect a given error pattern.

Hamming codes, on the other hand, can only manage 1-bit correction and (with an additional parity bit) 2-bit detection.

Because error correction requires not just detecting that an error occurred but identifying the exact bit that is wrong, longer codewords are required.
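To illustrate "identifying the exact bit that is wrong", here is a minimal Hamming(7,4) sketch (illustrative function names, single-error case only): recomputing the three parity checks on the received word gives a 3-bit syndrome whose value is the position of the flipped bit.

```python
def hamming_7_4_encode(d1, d2, d3, d4):
    """Data bits go to positions 3, 5, 6, 7; parity bits to positions 1, 2, 4."""
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]          # codeword, positions 1..7

def syndrome(word):
    """Each check covers the positions whose index contains that power of two."""
    s1 = word[0] ^ word[2] ^ word[4] ^ word[6]   # positions 1, 3, 5, 7
    s2 = word[1] ^ word[2] ^ word[5] ^ word[6]   # positions 2, 3, 6, 7
    s3 = word[3] ^ word[4] ^ word[5] ^ word[6]   # positions 4, 5, 6, 7
    return 4 * s3 + 2 * s2 + s1                  # 0 means no error; otherwise the error position

codeword = hamming_7_4_encode(1, 0, 1, 1)
received = codeword.copy()
received[5] ^= 1                                 # flip the bit at position 6

print(syndrome(codeword))    # 0: clean word
print(syndrome(received))    # 6: the syndrome names the flipped position, so it can be corrected
```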

Side note: Hamming codes with an additional parity bit can detect 3-bit errors when not used for correction, but Hamming codes are pointless if you're not using the error correction.