I don't use HAL, partly for performance reasons: it is very awkward and, IMO, does not provide enough abstraction to justify the overhead. Programming the hardware directly is not much more complicated, especially since you need a good understanding of what is going on either way. And, as you already discovered, HAL only supports one specific approach; as soon as you go your own way, you are on your own.
You seem to have a similar problem, since the overrun flag is set. After such an error you have to re-synchronize the receiver with the transmitter at the packet level. This requires out-of-band signaling, i.e. a symbol or line condition that cannot occur inside a packet. Framing errors are a good indicator here: they mean synchronization to the start of a character (the start bit) has been lost.
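As a minimal sketch of that idea: on a framing error, discard everything until the line goes idle, and use the idle gap as the out-of-band boundary to re-synchronize. The flag names and register layout below are assumptions loosely modeled on the STM32 USART status register, not a drop-in driver; the registers are mocked as a plain struct so the logic stands alone.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical status flags, modeled loosely on the STM32 USART SR
 * register -- names and bit positions are assumptions for illustration. */
#define UART_FLAG_FE   (1u << 1)  /* framing error: start-bit sync lost   */
#define UART_FLAG_IDLE (1u << 4)  /* idle line detected (inter-frame gap) */
#define UART_FLAG_RXNE (1u << 5)  /* receive register not empty           */

typedef struct {
    uint32_t sr;  /* status register (mock) */
    uint32_t dr;  /* data register (mock)   */
} uart_t;

typedef enum { RX_SYNCED, RX_RESYNC } rx_state_t;

/* On a framing error, drop everything until the line goes idle; the idle
 * gap marks a packet boundary and lets us re-synchronize with the sender. */
rx_state_t rx_step(uart_t *u, rx_state_t state, uint8_t *out, bool *got_byte)
{
    *got_byte = false;
    if (u->sr & UART_FLAG_FE) {
        u->sr &= ~(UART_FLAG_FE | UART_FLAG_RXNE); /* discard the bad byte */
        return RX_RESYNC;
    }
    if (state == RX_RESYNC) {
        if (u->sr & UART_FLAG_IDLE) {       /* gap seen: synced again */
            u->sr &= ~UART_FLAG_IDLE;
            return RX_SYNCED;
        }
        u->sr &= ~UART_FLAG_RXNE;           /* still mid-garbage: drop */
        return RX_RESYNC;
    }
    if (u->sr & UART_FLAG_RXNE) {
        *out = (uint8_t)u->dr;              /* normal byte: accept it  */
        u->sr &= ~UART_FLAG_RXNE;
        *got_byte = true;
    }
    return RX_SYNCED;
}
```

On real hardware the idle detection would come from the USART's idle-line flag or a receive timeout; the state machine itself stays the same.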
If the line is clean (no EMC problems), there should be no framing errors or corrupted data, unless the transmission parameters (baud rate, frame format) do not match.
If you use a simple ping-pong protocol, recovery may take a while. The right solution, however, depends on the protocol; a good protocol design tolerates transmission and overrun errors.
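A ping-pong (stop-and-wait) scheme can be sketched as follows. The transport hooks are hypothetical function pointers here so the sketch is self-contained; in a real system they would drive the UART. The point is that an overrun on the receiver side simply shows up as a missing ACK and is healed by the retransmission.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical transport hooks -- assumptions for illustration. */
typedef bool (*send_fn)(uint8_t seq, const uint8_t *data, int len);
typedef bool (*wait_ack_fn)(uint8_t seq);   /* false = timeout */

/* Stop-and-wait ("ping-pong"): send one packet, wait for its ACK,
 * retransmit on timeout, give up after max_tries attempts. */
bool pingpong_send(uint8_t seq, const uint8_t *data, int len,
                   send_fn send, wait_ack_fn wait_ack, int max_tries)
{
    for (int t = 0; t < max_tries; t++) {
        if (!send(seq, data, len))
            continue;            /* transmit failed: try again        */
        if (wait_ack(seq))
            return true;         /* ACK arrived in time               */
        /* Timeout: the receiver may have dropped the packet (e.g. on */
        /* an overrun) -- clear local error state and retransmit.     */
    }
    return false;
}
```

This is slow (one packet in flight at a time), which is the "may take a while" above; sliding-window protocols trade complexity for throughput.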
Note that with DMA reception you also have to enable the receive-error interrupts to be informed of errors at all. However, if you use a timeout (and a ping-pong protocol), you can simply clear the error flags, since the data evidently did not arrive in time anyway. If you do use error interrupts, be aware of race conditions.
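A sketch of the flag-clearing path, with the register layout mocked as a plain struct. The clearing sequence is modeled on the STM32F1 USART, where the overrun flag (ORE) is cleared by reading the status register followed by the data register; check your part's reference manual, since newer parts use a dedicated ICR register instead. Doing this from the timeout path rather than from an error ISR sidesteps the race between the interrupt handler and the DMA restart.

```c
#include <stdint.h>

/* Bit position is an assumption modeled on the STM32F1 USART SR. */
#define UART_SR_ORE (1u << 3)

typedef struct {
    volatile uint32_t sr;  /* status register (mock) */
    volatile uint32_t dr;  /* data register (mock)   */
} uart_t;

/* Clear a pending overrun before (re)starting a DMA transfer. */
static void uart_clear_overrun(uart_t *u)
{
    if (u->sr & UART_SR_ORE) {
        (void)u->sr;            /* step 1: read status register        */
        (void)u->dr;            /* step 2: read data register          */
        u->sr &= ~UART_SR_ORE;  /* mock only: real hardware clears the */
                                /* flag as a side effect of the reads  */
    }
}
```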