You can always calculate a future audio playback time using AVAudioTime. This behavior is useful because it lets you schedule additional buffers / segments / files from a completion callback before the current buffer / segment / file finishes, which avoids a gap in sound reproduction. This makes it possible to build a simple looping player with very little work. Here is an example:
import AVFoundation

/// A simple shared flag used to stop the loop from outside.
class Latch {
    var value: Bool = true
}

/// Schedules `file` on `player` repeatedly, back to back, so playback
/// loops without gaps. Set the returned Latch's value to false to stop.
func loopWholeFile(file: AVAudioFile, player: AVAudioPlayerNode) -> Latch {
    let looping = Latch()
    let frames = file.length
    let sampleRate = file.processingFormat.sampleRate
    var segmentTime: AVAudioFramePosition = 0
    var segmentCompletion: AVAudioNodeCompletionHandler!
    segmentCompletion = {
        if looping.value {
            // Schedule the next pass to start exactly where this one ends.
            segmentTime += frames
            player.scheduleFile(file,
                                at: AVAudioTime(sampleTime: segmentTime, atRate: sampleRate),
                                completionHandler: segmentCompletion)
        }
    }
    // Schedule the first pass, then invoke the handler once by hand so
    // that two passes are queued before playback begins.
    player.scheduleFile(file,
                        at: AVAudioTime(sampleTime: segmentTime, atRate: sampleRate),
                        completionHandler: segmentCompletion)
    segmentCompletion()
    player.play()
    return looping
}
In the code above, the whole file is scheduled twice before player.play() is called. Then, as each pass nears completion, its completion handler schedules another whole file further in the future, avoiding any gap in playback. To stop the loop, use the returned Latch, for example:
let looping = loopWholeFile(file: file, player: player)
sleep(1000)
looping.value = false
player.stop()
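The gapless effect comes purely from the sample-time arithmetic: each pass is scheduled to begin exactly file.length frames after the previous one. A minimal sketch of that bookkeeping, using plain integers instead of AVFoundation types (the names and the example frame count are illustrative only):

```swift
// Sketch of the scheduling arithmetic used above. `frames` plays the
// role of file.length; `segmentTime` is the running start position.
let frames: Int64 = 441_000        // e.g. a 10-second file at 44.1 kHz
var segmentTime: Int64 = 0         // start of the next pass, in samples

// Start times for the first four passes: each begins exactly where
// the previous one ends, so playback is contiguous.
var starts: [Int64] = []
for _ in 0..<4 {
    starts.append(segmentTime)
    segmentTime += frames
}
print(starts)  // [0, 441000, 882000, 1323000]
```

In the real player, each of these start values is wrapped in AVAudioTime(sampleTime:atRate:) and handed to scheduleFile, so the mixer begins each pass on the exact sample where the previous one ends.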