Transcoding fMP4 to HLS when recording on iOS using FFmpeg

TL;DR

I want to convert fMP4 fragments to TS segments (for HLS) as fragments are recorded using FFmpeg on an iOS device.

Why?

I am trying to achieve live downloads on iOS, while maintaining a seamless copy of HD on the local computer.

What I tried

  • AVAssetWriter, recording in 8-second clips, then concatenating the MP4s together with FFmpeg.

    What went wrong: the audio and video stutter from time to time. I have identified 3 causes for this:

    1) Priming frames added by the AAC encoder leave gaps in the audio.

    2) Since video frames are 33.33 ms long and audio samples are 0.022 ms long, the two streams may not be aligned at the end of a file.

    3) Frame-accurate encoding is available on macOS, but not on iOS. Details here

  • Muxing a large video-only MP4 file together with raw audio into TS segments using FFmpeg. The work was based on the Kickflip SDK

    What went wrong: from time to time an audio-only file, with no video, would be downloaded. I could never reproduce it in-house, but it was very upsetting for our users when they failed to record what they thought they had. There were also problems with accurately seeking to the ends of segments, as if the TS segments had been timestamped incorrectly.
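For reference, the concatenation step in the first attempt can be done losslessly with FFmpeg's concat demuxer. This is a minimal sketch; the function, file names, and list-file path are illustrative, and the clips must share identical codec parameters for `-c copy` to work:

```python
import subprocess

def build_concat_command(clips, out_path, list_path="clips.txt"):
    """Write the concat-demuxer list file and return the ffmpeg
    command that losslessly joins the clips. All file names here
    are illustrative; clips must share codec parameters."""
    with open(list_path, "w") as f:
        for clip in clips:
            f.write(f"file '{clip}'\n")  # one entry per source clip
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_path, "-c", "copy", out_path]

# To actually run it:
# subprocess.run(build_concat_command(["a.mp4", "b.mp4"], "joined.mp4"),
#                check=True)
```

Note that lossless concatenation like this still inherits problem 1) above: the AAC priming gaps are baked into each clip before they are joined.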

What I think now

Apple pushed fMP4 at WWDC this year (2016), and I hadn't considered it before that. Since an fMP4 file can be read and played back while it is still being written, I thought that FFmpeg could transcode the file as it is written, as long as we hold back bytes from FFmpeg until each fragment inside the file is complete.

However, I am not familiar with the FFmpeg C API; I have only used it in attempt #2.

What I need from you

  • Is this an acceptable solution? Is anyone familiar enough with fMP4 to know if I can actually accomplish this?
  • How do I know when AVFoundation has finished writing a fragment inside the file, so that I can pipe it to FFmpeg?
  • How can I take data from the file on disk, a chunk at a time, pipe it to FFmpeg, and have TS segments come out?
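One way to detect fragment boundaries without an AVFoundation callback is to watch the growing file and parse its top-level MP4 boxes: in a fragmented MP4, each fragment is a moof box followed by its mdat, so everything up to the end of a complete moof+mdat pair is safe to hand off. A minimal sketch of that idea (the function names and the cut-point policy are my own, not an established API):

```python
import struct

def complete_boxes(buf):
    """Yield (box_type, end_offset) for each complete top-level
    MP4 box in buf; stop at the first incomplete box."""
    off = 0
    while off + 8 <= len(buf):
        size, = struct.unpack_from(">I", buf, off)
        box_type = buf[off + 4:off + 8].decode("ascii", "replace")
        header = 8
        if size == 1:  # 64-bit largesize follows the type field
            if off + 16 > len(buf):
                break
            size, = struct.unpack_from(">Q", buf, off + 8)
            header = 16
        elif size == 0:  # box extends to end of file: never complete mid-stream
            break
        if size < header or off + size > len(buf):
            break  # box not fully written yet
        off += size
        yield box_type, off

def completed_fragments(buf):
    """Return byte offsets where a moof+mdat fragment ends,
    i.e. safe cut points for handing bytes to a transcoder."""
    cuts = []
    saw_moof = False
    for box_type, end in complete_boxes(buf):
        if box_type == "moof":
            saw_moof = True
        elif box_type == "mdat" and saw_moof:
            cuts.append(end)
            saw_moof = False
    return cuts
```

The same loop could run on each file-change notification: feed everything up to the last returned cut point downstream, and keep the incomplete tail for the next pass.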
ios ffmpeg avfoundation hls
Jul 06 '16 at 15:00
1 answer

Strictly speaking, you do not need to transcode the fMP4: if it contains H.264 + AAC, you just need to repackage the samples as TS (using ffmpeg -codec copy, or gpac).
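As a concrete sketch of that stream-copy route, assuming the recording really is H.264 + AAC (the file names and the 8-second target duration below are illustrative):

```python
def hls_copy_command(src, playlist="index.m3u8"):
    """Build an ffmpeg invocation that repackages (not re-encodes)
    an H.264+AAC file into HLS TS segments via stream copy.
    File names and segment length are illustrative."""
    return ["ffmpeg", "-y", "-i", src,
            "-codec", "copy",        # stream copy: no transcoding
            "-f", "hls",
            "-hls_time", "8",        # target segment duration (seconds)
            "-hls_list_size", "0",   # keep every segment in the playlist
            playlist]

# Run it with, e.g.:
# import subprocess
# subprocess.run(hls_copy_command("recording.mp4"), check=True)
```

Because nothing is re-encoded, this is fast and lossless; the segment boundaries still fall only on keyframes, so -hls_time is a target, not a guarantee.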

Wrt. alignment (1, 2), I suppose it all depends on your encoder settings (frame rate, sample rate, and GOP size). You can of course make sure the audio and video align exactly at fragment boundaries (see for example this table). If you are targeting iOS, I would recommend HLS protocol version 3 (or 4), which allows timing to be represented more accurately. It also allows audio and video to be delivered separately (not multiplexed).
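To make the alignment point concrete: video-frame and AAC-frame boundaries only coincide at the least common multiple of the two frame durations. A quick check with illustrative numbers (30 fps video, 1024-sample AAC frames):

```python
from fractions import Fraction
from math import gcd, lcm  # math.lcm needs Python 3.9+

def coincidence_period(a, b):
    """Smallest positive duration that is an integer multiple of both.
    For fractions p/q and r/s in lowest terms: lcm(p, r) / gcd(q, s)."""
    return Fraction(lcm(a.numerator, b.numerator),
                    gcd(a.denominator, b.denominator))

video_frame = Fraction(1, 30)     # 30 fps -> 33.33 ms per frame
aac_44k = Fraction(1024, 44100)   # AAC frame at 44.1 kHz, ~23.2 ms
aac_48k = Fraction(1024, 48000)   # AAC frame at 48 kHz, ~21.3 ms

print(float(coincidence_period(video_frame, aac_44k)))  # ~17.07 s
print(float(coincidence_period(video_frame, aac_48k)))  # ~0.53 s
```

So with 44.1 kHz audio an 8-second fragment cannot end exactly on both an audio and a video frame boundary, which is consistent with the drift described in the question, whereas 48 kHz audio lines up with 30 fps video every 8/15 of a second.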

I believe that ffmpeg should be able to consume a live fMP4 stream (i.e. pushed over a long-running HTTP POST), but it requires the receiving software to do something meaningful with it (i.e. restream it as HLS).

Jul 12 '17 at 9:33