I originally posted this on the Apple Developer Forums, so let's continue the discussion here (sorry for the pun). It was written in response to someone who raised a similar concept.
Correct me if I am wrong, and give us an example if you disagree, but creating MPEG-TS segments from the raw H.264 you get out of AVCaptureVideoDataOutput is not an easy task unless you transcode with x264 or something similar. Let's assume for a moment that you can easily produce MPEG-TS files; then it would be simple to list them in an m3u8 playlist and serve them by running a small web server on the device. As far as I know, and there are many applications that do this, tunneling to localhost on the device is not a real problem. So maybe you could generate HLS from the device in some way; I just question the performance you would get.
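To make the playlist half of that concrete, here is a minimal sketch of my own (not from the original discussion) that builds a rolling live m3u8 pointing at segments that are assumed to already exist as segment0.ts, segment1.ts, and so on; producing those TS segments from the raw frames is the hard part the paragraph above is about.

```swift
import Foundation

// Sketch: build a rolling live HLS playlist for segments that (hypothetically)
// already exist on disk. Segment naming and target duration are assumptions.
func makeLivePlaylist(firstSequence: Int, segmentCount: Int, targetDuration: Int) -> String {
    var lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        "#EXT-X-TARGETDURATION:\(targetDuration)",
        "#EXT-X-MEDIA-SEQUENCE:\(firstSequence)"
    ]
    for i in firstSequence..<(firstSequence + segmentCount) {
        lines.append("#EXTINF:\(targetDuration).0,")
        lines.append("segment\(i).ts")
    }
    // No #EXT-X-ENDLIST: the stream is live, so the player keeps re-polling the playlist.
    return lines.joined(separator: "\n") + "\n"
}

// Write it wherever the small on-device web server would serve it from.
let playlist = makeLivePlaylist(firstSequence: 0, segmentCount: 3, targetDuration: 10)
try? playlist.write(to: URL(fileURLWithPath: NSTemporaryDirectory() + "stream.m3u8"),
                    atomically: true, encoding: .utf8)
```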
So, on to technique number 2. Still using AVCaptureVideoDataOutput, you capture frames, wrap them in some neat little protocol (JSON, or perhaps something more esoteric like bencode), open a socket and send them to your server. Ah... good luck. You had better have a very reliable network, because sending uncompressed frames, even over Wi-Fi, eats a lot of bandwidth.
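To put a number on that: a single 1280x720 BGRA frame is about 3.7 MB, so 30 fps is roughly 110 MB/s before any protocol overhead. A rough sketch of the capture callback, with the transport left as a hypothetical send closure, might look like this:

```swift
import AVFoundation

final class RawFrameSender: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // Hypothetical transport; in practice this would write to an open socket.
    var send: (Data) -> Void = { _ in }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }
        guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
        // Raw BGRA bytes of one frame: ~3.7 MB at 1280x720, ~110 MB/s at 30 fps.
        let frame = Data(bytes: base, count: CVPixelBufferGetDataSize(pixelBuffer))
        send(frame) // wrap in JSON/bencode framing and push to the server here
    }
}
```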
So, on to technique number 3.
You write a new movie using AVAssetWriter and read it back from the temporary file using the standard C functions. That's fine, but what you read is essentially raw H.264: the MP4 is not finalized, so it has no moov atom yet. Now comes the fun part, regenerating that header yourself. Good luck.
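For context, a rough sketch of the writer setup (dimensions and file type are my assumptions). The point is that the moov atom only lands in the file once finishWriting completes, so anything you tail out of the temporary file before that is headerless sample data:

```swift
import AVFoundation

// Sketch only: a single AVAssetWriter writing H.264 into a temp file.
// Until finishWriting() completes, the file on disk has no moov atom to index it.
let url = URL(fileURLWithPath: NSTemporaryDirectory() + "capture.mp4")
let writer = try! AVAssetWriter(outputURL: url, fileType: .mp4)

let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1280,
    AVVideoHeightKey: 720
])
input.expectsMediaDataInRealTime = true
writer.add(input)

if writer.startWriting() {
    writer.startSession(atSourceTime: .zero)
    // ... append CMSampleBuffers from the capture callback via input.append(_:) ...
    input.markAsFinished()
    writer.finishWriting {
        // Only now is the header written; reading the file earlier gives raw, unindexed data.
    }
}
```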
So, on to technique 4, which apparently has some advantages.
We create not one but two AVAssetWriters and control them from a GCD dispatch_queue, since once created an AVAssetWriter can only be used once. We start the first one on a timer, and after the specified period, say 10 seconds, we start the second while the first is finished and torn down. Now we have a series of .mov files with complete moov atoms, each containing H.264-compressed video. We can send them to the server and stitch them back into one continuous video stream. As an alternative, we could use a simple streamer that accepts the mov files, remuxes them to the RTMP protocol using librtmp, and sends them to a media server.
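A compressed sketch of the writer-swapping idea, under my own assumptions about naming and the 10-second period; real code would also have to cut cleanly on keyframes and handle errors:

```swift
import AVFoundation

// Sketch of the double-writer idea: one AVAssetWriter records the current
// ~10-second segment, then it is finalized and the next one takes over.
final class SegmentingRecorder {
    private let queue = DispatchQueue(label: "segmenting.recorder")
    private var writer: AVAssetWriter?
    private var input: AVAssetWriterInput?
    private var segmentIndex = 0

    // E.g. upload the finished segment, or hand it to a librtmp-based remuxer.
    var onSegmentFinished: (URL) -> Void = { _ in }

    private func makeWriter() throws -> (AVAssetWriter, AVAssetWriterInput) {
        let url = URL(fileURLWithPath: NSTemporaryDirectory() + "segment\(segmentIndex).mov")
        segmentIndex += 1
        let w = try AVAssetWriter(outputURL: url, fileType: .mov)
        let i = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: 1280, AVVideoHeightKey: 720
        ])
        i.expectsMediaDataInRealTime = true
        w.add(i)
        return (w, i)
    }

    // Called from the capture delegate for every frame.
    func append(_ sampleBuffer: CMSampleBuffer) {
        queue.async {
            if self.writer == nil {
                guard let (w, i) = try? self.makeWriter(), w.startWriting() else { return }
                w.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
                self.writer = w; self.input = i
                // Rotate to a fresh writer after ~10 seconds.
                self.queue.asyncAfter(deadline: .now() + 10) { self.rotate() }
            }
            if self.input?.isReadyForMoreMediaData == true {
                self.input?.append(sampleBuffer)
            }
        }
    }

    private func rotate() {
        guard let finishing = writer, let finishingInput = input else { return }
        writer = nil; input = nil          // the next frame lazily starts the second writer
        finishingInput.markAsFinished()
        finishing.finishWriting {          // the moov atom is written here
            self.onSegmentFinished(finishing.outputURL)
        }
    }
}
```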
Could we just send each individual mov file to another Apple device, and so get device-to-device communication? This question has been misinterpreted many times. Finding another iPhone on the same subnet over Wi-Fi is quite easy and can certainly be done. Finding another device over a TCP connection via cellular is almost magic, if it can be done at all: it is only possible on cellular networks that hand the device a reachable IP address, and not all of them do.
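For the Wi-Fi case, discovery on the same subnet is typically done with Bonjour. A minimal sketch with the Network framework, where the service type "_movrelay._tcp" is my own invention, could look like this:

```swift
import Network

// Sketch: browse for a peer advertising a made-up Bonjour service "_movrelay._tcp"
// on the local network; the peer would publish an NWListener with the same service type.
let browser = NWBrowser(for: .bonjour(type: "_movrelay._tcp", domain: nil), using: .tcp)
browser.browseResultsChangedHandler = { results, _ in
    for result in results {
        print("Found peer endpoint: \(result.endpoint)")
        // Open an NWConnection to result.endpoint and push the .mov segments over it.
    }
}
browser.start(queue: .main)
```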
Say that you could; then you have an additional problem, because there is no off-the-shelf video player that can handle the transition between these separate movie files seamlessly. You will need to write your own streaming player, possibly based on ffmpeg decoding (that works pretty well).
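To see the problem, the obvious approach is to just queue the received segments. This naive sketch plays them back-to-back, but each item is treated as an independent movie, so there is a visible hiccup at every segment boundary, which is exactly why a custom ffmpeg-based player ends up being necessary:

```swift
import AVFoundation

// Naive playback sketch: queue the downloaded .mov segments in order.
// Playback works, but there is a glitch at each boundary between items.
let segmentURLs = (0..<3).map {
    URL(fileURLWithPath: NSTemporaryDirectory() + "segment\($0).mov")
}
let items = segmentURLs.map { AVPlayerItem(url: $0) }
let player = AVQueuePlayer(items: items)
player.play()
```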