Error detection efficiency (CRC, checksum, etc.)

I have a hypothetical situation involving sending data units of a thousand bytes each. Errors are rare, but when one occurs it is unlikely to be a single flipped bit; it is far more likely to be a burst error affecting several bits in a row.

At first I thought about using a checksum, but apparently that can miss errors spanning more than one bit. Parity will not work either, so a CRC may be the best option.

Should I use a cyclic redundancy check per thousand-byte unit? Or are there other methods that would work better?

+5
4 answers

Cyclic redundancy checks (CRCs) are popular precisely because they detect multi-bit errors effectively, with well-defined detection guarantees.

There are various standard CRC polynomials, with a tradeoff between detection strength and computational cost. In your case, you can choose the fastest one that still meets your detection requirements.

You might want to start with the Wikipedia article on the Cyclic Redundancy Check.
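
As a minimal sketch, here is one way this could look in Python, using the standard library's zlib.crc32 (the CRC-32 polynomial used by Ethernet and zip). The 4-byte trailer layout is an assumed framing for illustration, not something from the question:

```python
import zlib

def add_crc(unit: bytes) -> bytes:
    # Append the CRC-32 of the unit as a 4-byte big-endian trailer.
    return unit + zlib.crc32(unit).to_bytes(4, "big")

def check_crc(frame: bytes) -> bool:
    # Recompute the CRC over the payload and compare with the trailer.
    payload, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(payload) == int.from_bytes(trailer, "big")

frame = add_crc(bytes(1000))   # a 1000-byte unit of zeros, for illustration
assert check_crc(frame)
```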

+7

CRC is covered in another question here:
When is CRC more suitable for use than MD5 / SHA1?
It is well suited to detecting random errors and is easy to implement.
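
To illustrate "easy to implement", here is a hedged sketch of a bit-at-a-time CRC in Python; CRC-8 with polynomial 0x07 is an arbitrary choice for brevity, not something the question requires:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    # Bit-at-a-time CRC-8; real deployments usually use a table-driven CRC-16/32.
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift out the top bit; XOR in the polynomial when it was set.
            crc = ((crc << 1) ^ poly) if crc & 0x80 else (crc << 1)
            crc &= 0xFF
    return crc
```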

+2

CRC. Since your errors arrive in "bursts", a standard CRC is a good fit (for example, the CRC-32 used in Ethernet). An n-bit CRC is guaranteed to detect any single error "burst" of length n bits or less.
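
A quick sketch of that burst case with Python's zlib.crc32; the burst position and bit pattern here are made up for the demo:

```python
import zlib

unit = bytes(1000)                 # 1000-byte data unit (all zeros for the demo)
good_crc = zlib.crc32(unit)

damaged = bytearray(unit)
damaged[500] ^= 0b00111100         # flip four bits in a row: a short burst
assert zlib.crc32(bytes(damaged)) != good_crc   # CRC-32 catches the burst
```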

+1

Why a thousand bytes per unit? Blocks of 512 bytes (disk sectors, for example) are more typical. Either way, a CRC handles detection; if you also need to correct errors, look at an ECC.

A CRC per unit is cheap to compute and well understood. A CRC of sufficient width will catch exactly the kind of "several bits in a row" error you describe.

+1
