To understand why synchsafe integers are used, it helps to know a little about the MP3 data format and how a media player plays an MP3 file. MP3 audio is stored in a file as a series of frames. Each frame contains a small chunk of digital audio encoded in MP3 format, as well as some metadata about the frame itself. The first 11 bits (sometimes 12) of each MP3 frame are all set to 1; this is called the frame sync, and it is the pattern the media player scans for when trying to play an MP3 file or stream. When the player finds this 11-bit sequence, it knows it has found an MP3 frame that can be decoded and played.
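As a rough sketch of what that scan looks like, the check below tests whether two bytes begin with the 11-bit frame sync (the function name is my own, not from any particular library):

```c
#include <stdint.h>

/* Sketch: returns 1 if the two bytes at p begin with the 11-bit
   MP3 frame sync (eleven consecutive 1-bits), 0 otherwise. */
static int mp3_is_frame_sync(const uint8_t *p)
{
    /* First byte must be 0xFF and the top 3 bits of the
       second byte must also be set: 11111111 111xxxxx. */
    return p[0] == 0xFF && (p[1] & 0xE0) == 0xE0;
}
```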
See: www.id3.org/mp3Frame
As you know, the ID3 tag contains data about the track as a whole. In version 2.x and later, the ID3 tag is located at the beginning of the file, or can be embedded in the MP3 stream (although this is rarely done). The ID3 tag header contains a 32-bit field that indicates how many bytes are in the tag. An unsigned 32-bit integer can hold a maximum value of 0xFFFFFFFF. So if 0xFFFFFFFF were written into the size field, it would declare a really big tag (pragmatically, too big). When the player tries to play the file or stream, it scans for the 11-bit frame sync, but instead finds the size field in the ID3 tag header, whose leading 11 bits are all set to 1, and tries to play the tag data. This usually doesn't sound so good, regardless of your musical taste. The solution is an integer format that can never contain a run of 11 set bits: hence the synchsafe integer, which stores only 7 bits per byte and keeps the high bit of every byte at 0.
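Encoding works by spreading the value's bits across the bytes, 7 bits at a time, so no byte ever has its high bit set. A minimal sketch (the function name is hypothetical, and it assumes the value fits in 28 bits, the maximum a 4-byte synchsafe integer can represent):

```c
#include <stdint.h>

/* Sketch: encode a value < 2^28 as a 4-byte synchsafe integer.
   Each output byte carries 7 bits, so its high bit is always 0
   and no run of 11 consecutive 1-bits can ever appear. */
static void int_to_sync_safe(uint32_t value, uint8_t out[4])
{
    out[0] = (value >> 21) & 0x7F;
    out[1] = (value >> 14) & 0x7F;
    out[2] = (value >> 7)  & 0x7F;
    out[3] = value & 0x7F;
}
```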
A synchsafe integer can be converted to a normal integer in C/C++ using something like the following:
```c
#include <stdint.h>

uint32_t ID3_sync_safe_to_int( uint8_t* sync_safe )
{
    /* Each byte carries only 7 significant bits; recombine them. */
    uint32_t byte0 = sync_safe[0];
    uint32_t byte1 = sync_safe[1];
    uint32_t byte2 = sync_safe[2];
    uint32_t byte3 = sync_safe[3];

    return byte0 << 21 | byte1 << 14 | byte2 << 7 | byte3;
}
```
Hope this helps.
J. Andrew Laughlin