How are SYNC words chosen?

I work with a data system that uses a fixed SYNC word (0xD21DB8) at the beginning of each superframe. I would be interested to know how such SYNC words are selected, i.e. on what criteria designers base the choice of length and value of a SYNC word.

Tags: embedded, communication
2 answers

In short:

  • high probability of uniqueness

  • high transition density

It depends on the underlying layer (in communications terms). If that underlying layer does not provide a means of separating payload data from control signals, then a framing protocol has to be devised. For a bit-oriented synchronous transport layer, a SYNC pattern is usually used to delimit the payload units. A good example of this method is SONET/SDH/OTN, the backbone technology of optical transport networks.
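As a rough illustration of delimiting by pattern, here is a minimal C sketch of a bit-serial hunter for a 24-bit SYNC word (using the asker's 0xD21DB8; the function name and structure are mine, not taken from any particular standard):

    #include <stdbool.h>
    #include <stdint.h>

    #define SYNC_WORD 0xD21DB8u
    #define SYNC_MASK 0xFFFFFFu              /* low 24 bits */

    static uint32_t shift_reg;

    /* Shift in one received bit (no byte alignment assumed) and report
     * whether the last 24 bits now match the SYNC pattern. */
    bool sync_hunt(int bit)
    {
        shift_reg = ((shift_reg << 1) | (bit & 1u)) & SYNC_MASK;
        return shift_reg == SYNC_WORD;
    }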

Usually the main criterion for choosing a SYNC word is a high probability of uniqueness. Of course, what makes a pattern unique depends on the encoding used for the payload.
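To put a number on "probability of uniqueness" (an illustrative calculation of my own, not from the original answer): if the scrambled payload looks like random bits, any given 24-bit window matches a 24-bit SYNC word with probability 2^-24 ≈ 6×10^-8, so a 10 Mbit/s stream would still produce on the order of 10^7 × 2^-24 ≈ 0.6 false hits per second. That is exactly why the confirmation step described next is needed.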

Example: in SONET/SDH, once the SYNC word has been found, it is checked over several consecutive superframes (I don't remember exactly how many) before the correct synchronization state is declared. This is necessary because of false positives: the encoding of a synchronous bitstream cannot be guaranteed to generate encoded payload patterns orthogonal to the SYNC word.
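A minimal sketch of that confirm-before-declaring idea (the state names and the CONFIRM_FRAMES value are placeholders, not the actual SONET/SDH figures; real framers also typically require several consecutive misses before dropping sync, which is simplified away here):

    #include <stdbool.h>

    #define CONFIRM_FRAMES 3

    enum sync_state { HUNT, PRESYNC, IN_SYNC };

    struct framer {
        enum sync_state state;
        unsigned hits;                   /* consecutive confirmed frames */
    };

    /* Called once per expected superframe boundary with "did the SYNC
     * word appear exactly where expected?" */
    void framer_step(struct framer *f, bool sync_seen)
    {
        switch (f->state) {
        case HUNT:
            if (sync_seen) { f->state = PRESYNC; f->hits = 1; }
            break;
        case PRESYNC:
            if (!sync_seen) { f->state = HUNT; f->hits = 0; }
            else if (++f->hits >= CONFIRM_FRAMES) f->state = IN_SYNC;
            break;
        case IN_SYNC:
            if (!sync_seen) { f->state = HUNT; f->hits = 0; }
            break;
        }
    }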

There is another criterion: high transition density. Sometimes the underlying layer carries clock and data combined (i.e., not on separate lines). In this case, for the receiver to be able to slice symbols out of the stream, it is extremely important to ensure as many 0->1 and 1->0 transitions as possible, so that the clock signal can be recovered (a rough figure of merit is sketched below).
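For instance, this small helper (mine, purely illustrative) counts the transitions inside an n-bit word so that candidate SYNC words can be compared; the asker's 0xD21DB8 (binary 110100100001110110111000) contains 11 transitions across its 24 bits:

    #include <stdint.h>

    /* Count the 0->1 and 1->0 transitions between adjacent bits of an
     * n-bit word; higher counts are friendlier to clock recovery. */
    unsigned transition_count(uint32_t word, unsigned nbits)
    {
        unsigned count = 0;
        for (unsigned i = 0; i + 1 < nbits; i++)
            count += ((word >> i) & 1u) != ((word >> (i + 1)) & 1u);
        return count;
    }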

Hope this helps.

Update: these presentations may also be of interest.


At the physical level, another consideration (besides those mentioned in jldupont's answer) is that the sync word can be used to synchronize the receiver's clock with the sender's clock. Synchronization may only require resetting the phase of the receiver clock, but it may also require adjusting the clock frequency to better match the sender.

For a typical asynchronous protocol, the sender and receiver are expected to run their clocks at the same rate. In practice, of course, the clocks never match exactly, so a maximum error is usually specified.
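As a rough worked example (my numbers, assuming an 8N1 UART frame that resynchronizes on the start-bit edge and samples at mid-bit): the last sample, in the middle of the stop bit, falls 9.5 bit times after the edge and must land within half a bit of its ideal point, so the total rate mismatch budget is about 0.5/9.5 ≈ 5%, shared between sender and receiver, leaving roughly ±2% per side once edge-detection jitter is accounted for.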

Some protocols do not require the receiver to tune its clock frequency, but instead tolerate the error by oversampling or some other method. For example, a typical UART handles the error by resetting its phase on the leading edge of the start bit and then taking samples at the point where it expects the middle of each bit. In this case, the synchronization word is just a start bit, and it provides a transition at the beginning of the message.
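A minimal sketch of that scheme, assuming the common 16x-oversampled design (the struct and function names are mine; real UARTs usually also majority-vote a few samples around mid-bit, which is omitted here):

    #include <stdbool.h>
    #include <stdint.h>

    struct uart_rx {
        bool     receiving;
        unsigned tick;       /* oversample tick within the current bit, 0..15 */
        unsigned bitnum;     /* 0 = start bit, 1..8 = data bits, 9 = stop bit */
        uint8_t  byte;       /* assembled data bits, LSB first */
        int      last_level;
    };

    /* Called once per 16x-oversample tick with the current line level. */
    void uart_rx_tick(struct uart_rx *u, int level)
    {
        if (!u->receiving) {
            /* Idle line is high; the falling edge of the start bit
             * zeroes the phase of the receiver's bit clock. */
            if (u->last_level == 1 && level == 0) {
                u->receiving = true;
                u->tick = 0;
                u->bitnum = 0;
                u->byte = 0;
            }
        } else if (++u->tick == 16) {    /* next bit boundary */
            u->tick = 0;
            u->bitnum++;
        } else if (u->tick == 8) {       /* expected middle of the bit */
            if (u->bitnum >= 1 && u->bitnum <= 8)
                u->byte |= (uint8_t)((level & 1) << (u->bitnum - 1));
            else if (u->bitnum == 9)
                u->receiving = false;    /* stop bit: u->byte is complete */
        }
        u->last_level = level;
    }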

In the HART industrial protocol, the synchronization word is 0xFF plus a parity bit, repeated several times. Transmitted as an analog signal with FSK encoding, it appears as 8 periods (equal to 8 bit times) of a 1200 Hz sine wave, followed by one bit time at 2200 Hz. This pattern allows the receiver to detect that a valid signal is present, and then to synchronize with the beginning of the byte by detecting the transition from 2200 Hz to 1200 Hz. If necessary, the receiver can also use this signal to adjust its clock.
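As an illustration of how a receiver might tell the two tones apart (a crude zero-crossing approach of my own, not the HART specification): at 1200 baud a bit lasts about 833 µs, so a 1200 Hz tone fits roughly one cycle (about 2 zero crossings) per bit, while a 2200 Hz tone fits roughly 1.8 cycles (3-4 crossings):

    #include <stddef.h>

    /* Classify one bit time of signed analog samples by counting zero
     * crossings: return 1 for ~1200 Hz (mark), 0 for ~2200 Hz (space). */
    int fsk_bit(const int *samples, size_t n)
    {
        unsigned crossings = 0;
        for (size_t i = 1; i < n; i++)
            if ((samples[i - 1] < 0) != (samples[i] < 0))
                crossings++;
        return crossings <= 2 ? 1 : 0;   /* threshold between ~2 and ~4 */
    }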

