At the physical level, another consideration (besides those mentioned in jldupont's answer) is that the sync word can be used to synchronize the receiver's clock with the sender's clock. Synchronization may require only resetting the phase of the receiver's clock, but it may also require adjusting its frequency to better match the sender.
For a typical asynchronous protocol, the sender and receiver need their clocks to run at very nearly the same rate. In practice the clocks never match exactly, so a maximum allowable error is usually specified.
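As a rough illustration (my numbers, not part of the protocol specs): with 8N1 framing there are 10 bit times between resynchronization points, and the last sample must still land inside its bit, i.e. within half a bit time of the ideal position. The total frequency error therefore has to stay below 0.5 / 10 = 5%, which, shared between transmitter and receiver and allowing for edge jitter and sampling granularity, leaves roughly ±2% per side.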
Some protocols do not require the receiver to tune its clock frequency, but instead tolerate the error by oversampling or some other method. For example, a typical UART handles the error by resynchronizing on the leading edge of the start bit and then taking its samples at the point where it expects the middle of each bit. In this case, the synchronization "word" is just the start bit, and it provides a transition at the beginning of each character.
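To make that concrete, here is a minimal sketch of a bit-banged UART receiver. The helpers wait_for_falling_edge(), read_rx_pin() and delay_us(), and the 9600 baud 8N1 framing, are assumptions for illustration only; a real UART peripheral does this in hardware and usually oversamples (commonly 16x, with a majority vote around the midpoint) rather than taking the single mid-bit sample used here.

```c
/* Sketch of a software (bit-banged) UART receiver, 8N1 at 9600 baud.
 * The extern helpers are assumed to be provided by the target platform. */
#include <stdint.h>

#define BAUD    9600u
#define BIT_US  (1000000u / BAUD)        /* ~104 us per bit */

extern int  read_rx_pin(void);           /* returns 0 or 1 (assumed helper) */
extern void wait_for_falling_edge(void); /* blocks until start-bit edge (assumed helper) */
extern void delay_us(uint32_t us);       /* busy-wait delay (assumed helper) */

uint8_t uart_receive_byte(void)
{
    uint8_t byte = 0;

    /* The start bit's falling edge is the only synchronization point:
     * it re-zeroes the receiver's notion of where bit boundaries are. */
    wait_for_falling_edge();

    /* Skip the rest of the start bit plus half a data bit, so every later
     * sample lands mid-bit, giving maximum tolerance to clock mismatch. */
    delay_us(BIT_US + BIT_US / 2);

    for (int i = 0; i < 8; i++) {        /* LSB first, as in a standard UART */
        byte |= (uint8_t)(read_rx_pin() << i);
        delay_us(BIT_US);
    }

    /* Stop bit ignored here; real code would check that the line is high. */
    return byte;
}
```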
In the industrial HART protocol, the synchronization word is 0xFF plus a parity bit, repeated several times. This is transmitted as an analog FSK-encoded signal and appears as 8 periods (equal to 8 bit times) of a 1200 Hz sine wave, followed by one bit time at 2200 Hz. This pattern lets the receiver detect that a valid signal is present and then synchronize to the start of a byte by detecting the transition from 2200 Hz to 1200 Hz. If necessary, the receiver can also use this signal to adjust its clock.
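Below is a minimal sketch of how such a preamble could be detected in software. The sampling rate, window size and the idea of comparing Goertzel energies are my own illustrative assumptions, not anything mandated by HART; real HART modems typically use dedicated Bell 202-style demodulator hardware. The idea is simply that a long stretch dominated by the 1200 Hz mark tone indicates the 0xFF preamble, and a change in the dominant tone marks the bit/byte boundary described above.

```c
/* Illustrative FSK tone discrimination for a HART-style signal.
 * Assumption: samples arrive in a caller-supplied buffer at SAMPLE_RATE Hz,
 * and a window covering one or more bit times is analysed at a time. */
#include <math.h>
#include <stddef.h>

#define SAMPLE_RATE 38400.0                 /* assumed sampling rate, Hz */
#define PI          3.14159265358979323846

/* Energy of the input window at a single frequency (Goertzel algorithm). */
static double goertzel_power(const double *x, size_t n, double freq_hz)
{
    double coeff = 2.0 * cos(2.0 * PI * freq_hz / SAMPLE_RATE);
    double s1 = 0.0, s2 = 0.0;
    for (size_t i = 0; i < n; i++) {
        double s0 = x[i] + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

/* Returns 1 if the window is dominated by the 1200 Hz mark tone
 * (preamble / logic 1), 0 if the 2200 Hz space tone dominates. */
int fsk_window_is_mark(const double *x, size_t n)
{
    return goertzel_power(x, n, 1200.0) > goertzel_power(x, n, 2200.0);
}
```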