I haven't looked at the SDL code, but my assumption is that "chunks" are meant for short sound samples: they are decoded completely up front and cached in memory. "Music", by contrast, is not cached in its entirety; it is decoded and buffered incrementally as needed, on the assumption that it will mostly be played from the beginning and continuously from that point, with perhaps an occasional seek.
So the reason is memory. You don't want to decode, say, a 4-minute 16-bit stereo recording entirely into memory, since it would consume 44100 Hz * 2 bytes * 2 channels * 4 min * 60 s/min = 42,336,000 bytes (about 40 MB), when you could instead decode and buffer smaller parts as you go.
OTOH, if you can afford roughly 10 MB of RAM per minute of music, and you want to save the CPU time that decoding on the fly would consume... you could probably use chunks for music too.