Streaming video frames from the server with ffmpeg

I am sending video frames as JPEG images to my server over an RTMP stream. On the server side, I want to connect the stream (IP + port) to ffmpeg so that it can capture the images from the stream and turn them into a video stream.

My server is already listening on that IP and port for incoming frames; that part is done. Where I am stuck is how to convert these frames into a video stream with ffmpeg. Can someone please tell me how to do this? I know that image2pipe is what I should use, but I could not find its syntax or documentation on Google.

Tags: ffmpeg, video-streaming
2 answers

On the server side, you can call the command-line ffmpeg application to do the heavy lifting for you and produce the video stream.
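To address the image2pipe part of the question directly: if your server process writes the raw JPEG bytes of each frame to ffmpeg's standard input, something along these lines should work. This is only a sketch; your_server_process is a placeholder, and the frame rate, encoder, and output URL are assumptions you will need to adapt to your setup:

 your_server_process | ffmpeg -f image2pipe -c:v mjpeg -framerate 25 -i - -c:v libx264 -pix_fmt yuv420p -f flv rtmp://localhost/live/out 

Here ffmpeg treats stdin as a concatenated sequence of JPEG images, decodes them with the MJPEG decoder, and re-encodes them as H.264 in an FLV container; replace the output with a file name such as out.mp4 if you only want a recording.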

On the client side, there are two approaches that I want to discuss:

Approach 1: use ffplay on the iPhone to connect to the server and display the video stream.

Pros: this is the easiest and fastest solution!

FFmpeg was ported to the iPhone a while ago, so you can simply call ffplay from the iPhone, tell it to connect to the server, and the job is done. Check out RTP Audio Streaming from FFmpeg for a quick example.
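For example (the URL is just a placeholder for wherever your server publishes the stream):

 ffplay rtmp://your.server.address/live/stream 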

Cons: there seem to be legal (licensing) issues around this, so some people do not recommend shipping an application that bundles FFmpeg.

Approach 2: write an ffplay-like application for the iPhone yourself.

Pros: you can set up custom encoding/decoding procedures to protect the transmitted data and require everyone else to use your player to view the stream.

If you are downloading real JPEGs (with headers and all, just like regular JPEG files), the first thing you need is a networking library so that your application can connect to the server and retrieve the data.

I suggest that, for each new connection your server receives, it send a custom header (a few bytes) informing the client of the size of each frame being sent, so that the client knows how large each frame is.

After that, the application will have to use another library to interpret the incoming data as a JPEG frame/file. OpenCV comes to mind, but I am sure you can find smaller libraries; perhaps iOS even offers a framework for this, but I really don't know.

Once your application has access to the useful information each frame carries (i.e. the image dimensions and pixels), it can create a UIImage from that data for every frame that arrives over the network and display it on the screen.

Cons: you will need to build the application from scratch and perhaps learn some new APIs along the way.


There are actually 2 methods:

 ffmpeg -i "rtmp://localhost/etc" out.flv 

or

 rtmpdump -v -r rtmp://localhost/etc | ffmpeg -i - out.flv 
