HTTP Streaming Server on iPhone

I am trying to run an HTTP streaming server on an iPhone that captures a video stream from a camera and transfers it to an HTML5 client (which supports HTTP Live Streaming).

So far I have two modules working:

  • An HTTP streaming server on iOS (written in Node.js) that dynamically updates the index file from the list of Transport Stream (video/MP2T) files created by the video capture module.
  • A video capture module that uses AVCaptureMovieFileOutput to continuously create a series of 10-second QuickTime files (there is a small gap between them, but it is small enough for my application).
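For context, the index file mentioned above is just a text playlist that the server regenerates as new segments appear. A minimal Python sketch (the segment names, durations, and function name are hypothetical, not from my actual server):

```python
def make_index(segment_names, target_duration=10, media_sequence=0):
    """Build a live HLS index (.m3u8) for a rolling list of TS segments.

    For a live stream the playlist omits #EXT-X-ENDLIST, so the client
    keeps polling for an updated index.
    """
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{media_sequence}",
    ]
    for name in segment_names:
        # Assume every segment is close to the target duration.
        lines.append(f"#EXTINF:{target_duration}.0,")
        lines.append(name)
    return "\n".join(lines) + "\n"

print(make_index(["seg42.ts", "seg43.ts", "seg44.ts"], media_sequence=42))
```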

I need an on-the-fly converter that turns each QuickTime file into a Transport Stream file (no re-encoding is needed; I only need a different container) to connect the two modules above.
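My understanding is that a transport stream is a sequence of fixed 188-byte packets, each starting with a 0x47 sync byte, so the remuxing job is mostly slicing the elementary stream into such packets plus the PAT/PMT/PES bookkeeping. A simplified sketch of building a single packet (the function name is mine, and the trailing 0xFF padding is a simplification; a real remuxer pads with an adaptation field):

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def ts_packet(pid, counter, payload, payload_unit_start=False):
    """Wrap up to 184 bytes of payload in one MPEG-TS packet.

    This only illustrates the fixed 4-byte header; a real remuxer must
    also emit PAT/PMT tables, PES headers with timestamps, and proper
    adaptation-field stuffing and PCR.
    """
    if len(payload) > TS_PACKET_SIZE - 4:
        raise ValueError("payload too large for one packet")
    header = bytearray(4)
    header[0] = SYNC_BYTE
    # payload_unit_start_indicator bit plus the top 5 bits of the 13-bit PID
    header[1] = (0x40 if payload_unit_start else 0x00) | ((pid >> 8) & 0x1F)
    header[2] = pid & 0xFF
    # adaptation_field_control = 01 (payload only) + 4-bit continuity counter
    header[3] = 0x10 | (counter & 0x0F)
    packet = bytes(header) + payload
    return packet + b"\xff" * (TS_PACKET_SIZE - len(packet))  # simplified padding
```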

I am using this approach because, as far as I know, it is the only way to use the iPhone's hardware video encoder (I have done quite a lot of research on this topic, and I'm 99% sure; please let me know if I'm wrong).

A few people have suggested ffmpeg, but I would prefer much smaller code under an MIT license (if any exists), or to write something from scratch (and open-source it under an MIT license).

I am completely new to these media containers, and I would really appreciate it if someone could point me in the right direction (sample code, open-source projects, documents, ...).

ios avfoundation streaming
1 answer

I posted this on the Apple Developer Forums, where the discussion is continuing (sorry for the pun). It was in response to someone who raised a similar idea.

Technique 1. Correct me if I'm wrong (and give us an example if you disagree), but creating MPEG-TS from the raw H.264 you get out of AVCaptureVideoDataOutput is not an easy task unless you transcode with x264 or something similar. Let's assume for a moment that you could easily get MPEG-TS files; then it would be easy to collect them in an m3u8 playlist, run a small web server, and serve them. As far as I know, and there are many apps that do this, serving from localhost on the device is not a rejection problem. So maybe you could generate HLS on the device one way or another; I question the performance you would get.
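The "small web server" part really is trivial. A throwaway sketch (the port, handler name, and MIME-type mapping are my own choices, not from the original discussion):

```python
import http.server
import socketserver

PORT = 8080  # arbitrary

class SegmentHandler(http.server.SimpleHTTPRequestHandler):
    """Serve .m3u8 playlists and .ts segments from the current directory
    with the MIME types HLS clients expect."""
    extensions_map = {
        **http.server.SimpleHTTPRequestHandler.extensions_map,
        ".m3u8": "application/vnd.apple.mpegurl",
        ".ts": "video/mp2t",
    }

if __name__ == "__main__":
    with socketserver.TCPServer(("", PORT), SegmentHandler) as httpd:
        httpd.serve_forever()
```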

So, on to technique number 2. Still using AVCaptureVideoDataOutput, you capture the frames, wrap them in some neat little protocol (JSON, or perhaps something more esoteric like bencode), open a socket, and send them to your server. Ahh... good luck. You'd better have a good, reliable network, because sending uncompressed frames even over Wi-Fi takes serious bandwidth.
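A sketch of the kind of framing this technique needs so the receiver can find frame boundaries on a TCP stream; the 4-byte length prefix and the field names are arbitrary choices of mine, not any standard:

```python
import json
import struct

def pack_frame(frame_bytes, timestamp):
    """Envelope: 4-byte big-endian JSON-header length, JSON header, raw pixels."""
    header = json.dumps({"ts": timestamp, "size": len(frame_bytes)}).encode()
    return struct.pack(">I", len(header)) + header + frame_bytes

def unpack_frame(buf):
    """Inverse of pack_frame for a buffer known to start at a frame boundary."""
    (hlen,) = struct.unpack_from(">I", buf, 0)
    header = json.loads(buf[4:4 + hlen])
    payload = buf[4 + hlen:4 + hlen + header["size"]]
    return header, payload
```

Even with zero framing overhead, a 1280x720 BGRA frame is about 3.5 MB, so 30 fps is over 100 MB/s; that is the bandwidth problem in concrete terms.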

So, on to technique number 3.

You write a new movie using AVAssetWriter and read it back from the temporary file using standard C functions. That's fine, but what you have is raw H.264: the MP4 is not complete, so it has no moov atom. Now comes the fun part: regenerating that header. Good luck.
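To see why the missing moov atom matters, here is a sketch that walks the top-level boxes of an ISO base-media (QuickTime/MP4) file; a temp file read back mid-write will simply never report a moov box. Function names are mine:

```python
import struct

def top_level_boxes(data):
    """Yield (type, size) for each top-level box of an ISO base-media file."""
    off = 0
    while off + 8 <= len(data):
        size, = struct.unpack_from(">I", data, off)
        btype = data[off + 4:off + 8].decode("ascii", "replace")
        if size == 1:
            # 64-bit "largesize" stored just after the type field
            size, = struct.unpack_from(">Q", data, off + 8)
        elif size == 0:
            # box runs to end of file
            size = len(data) - off
        if size < 8:
            break  # corrupt or truncated data; stop rather than loop forever
        yield btype, size
        off += size

def has_moov(data):
    """True only once the writer has finalized the file and emitted the
    movie header (sample tables, timescale, etc.)."""
    return any(t == "moov" for t, _ in top_level_boxes(data))
```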

So, on to technique 4, which apparently has some advantages.

We create not one but two AVAssetWriters and manage them with a GCD dispatch_queue, since once started an AVAssetWriter can only be used once. We start the first on a timer; after the specified period, say 10 seconds, we start the second while the first is being torn down. Now we have a series of .mov files with complete moov atoms, each containing H.264-compressed video. We can now send them to a server and assemble them into one complete video stream. Alternatively, we could use a simple streamer that takes the mov files, converts them to the RTMP protocol using librtmp, and sends them to a media server.
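The rotation logic itself is simple. A language-neutral sketch in Python with a stand-in for AVAssetWriter (all names are illustrative; no AVFoundation involved, and a real implementation rotates on a timer rather than a frame count):

```python
import itertools

class FakeWriter:
    """Stand-in for AVAssetWriter: in AVFoundation a writer instance can
    record only one file, so a fresh writer is needed for every segment."""
    def __init__(self, path):
        self.path = path
        self.frames = []
    def append(self, frame):
        self.frames.append(frame)
    def finish(self):
        # Finalizing is what writes the moov atom in the real API.
        return self.path

def segment(frames, frames_per_segment):
    """Rotate writers so that while one segment is being finalized the
    next writer is already accepting frames, keeping the capture gap small."""
    finished_files = []
    counter = itertools.count()
    writer = FakeWriter(f"seg{next(counter)}.mov")
    for i, frame in enumerate(frames, 1):
        writer.append(frame)
        if i % frames_per_segment == 0:
            finished_files.append(writer.finish())
            writer = FakeWriter(f"seg{next(counter)}.mov")
    if writer.frames:
        finished_files.append(writer.finish())
    return finished_files
```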

Could we just send each individual mov file to another Apple device, thereby getting device-to-device communication? That question has been misinterpreted many times. Finding another iPhone on the same subnet over Wi-Fi is quite easy and can be done. Finding another device over a TCP connection on cellular is almost magical; if it can be done at all, it is only possible on cellular networks that hand out routable IP addresses, which not all carriers regularly do.

Say you could; then you have an additional problem, because none of the stock video players will be able to handle the transition between the different movie files. You would have to write your own streaming player, probably based on ffmpeg decoding. (That works pretty well.)
