After implementing a solution for encoding video (with sound) as described in the question Encoding video using AVAssetWriter - CRASHES , I found that the code works correctly in the iPhone Simulator. Unfortunately, on a real iPhone 5 (and other devices), some videos fail to encode their audio.
For example, videos created with the WWDC 2011 RosyWriter sample code ( https://developer.apple.com/library/IOS/samplecode/RosyWriter/Introduction/Intro.html ) are not fully encoded because -[AVAssetReaderOutput copyNextSampleBuffer] never returns.
The video buffers arrive correctly, but as soon as the code tries to copy the first audio CMSampleBufferRef, the call hangs. With videos from other sources, such as those recorded in the built-in iOS Camera app, the audio is imported correctly.
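For context, here is a minimal sketch of the audio reading loop where the hang occurs. This is not the exact code from my project; `asset` is assumed to be an AVURLAsset already created from the source movie, and error handling is omitted for brevity.

```objc
#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

// Decode the audio track to linear PCM.
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
NSDictionary *audioSettings = @{ AVFormatIDKey : @(kAudioFormatLinearPCM) };
AVAssetReaderTrackOutput *audioOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack
                                               outputSettings:audioSettings];
[reader addOutput:audioOutput];
[reader startReading];

CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [audioOutput copyNextSampleBuffer])) {
    // Process the buffer here. On the affected devices, the copyNextSampleBuffer
    // call above never returns for the first audio buffer of RosyWriter-produced
    // files, while the video output behaves normally.
    CFRelease(sampleBuffer);
}
```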
This thread, https://groups.google.com/forum/#!topic/coreaudio-api/F4cqCu99nUI , describes copyNextSampleBuffer hanging when used together with AudioQueues, and suggests keeping the operations on a single thread. I tried keeping everything on a separate thread, and on the main thread, but no luck.
Has anyone else experienced this and found a possible solution?
EDIT: It seems that videos created by RosyWriter have their tracks ordered differently from videos recorded by the camera app itself: the audio stream is stream 0 and the video stream is stream 1.
Not sure if this matters to AVAssetReader.
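The track ordering can be checked directly with AVAsset; a hypothetical inspection (the `movieURL` name is a placeholder) might look like:

```objc
#import <AVFoundation/AVFoundation.h>

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:movieURL options:nil];
[asset.tracks enumerateObjectsUsingBlock:^(AVAssetTrack *track, NSUInteger idx, BOOL *stop) {
    // Logs each track's position, persistent ID, and media type
    // (e.g. "vide" for video, "soun" for audio).
    NSLog(@"track %lu: trackID=%d mediaType=%@",
          (unsigned long)idx, track.trackID, track.mediaType);
}];
```

If the ordering matters, the RosyWriter files would log the audio track at index 0, while camera-app recordings would log the video track first.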
jlw