How to write an OpenCV Mat to a GStreamer pipeline?

I want to add some OpenCV processing to a GStreamer pipeline and then send the result via udpsink.

I can read frames from gstreamer as follows:

```cpp
// may add some plugins to the pipeline later
cv::VideoCapture cap("v4l2src ! video/x-raw, framerate=30/1, width=640, height=480, format=RGB ! videoconvert ! appsink");
cv::Mat frame;
while (true) {
    cap >> frame;
    // do some processing to the frame
}
```

But I can't understand how to push the processed frame into the following pipeline:

```
appsrc ! x264enc ! mpegtsmux ! udpsink host=localhost port=5000
```

I tried

```cpp
cv::VideoWriter writer = cv::VideoWriter("appsrc ! x264enc ! mpegtsmux ! udpsink host=localhost port=5000", 0, (double)30, cv::Size(640, 480), true);
writer << processedFrame;
```

However, the receiver side does not receive anything. (I am using the following as the receiver:)

```
gst-launch-1.0 udpsrc port=5000 ! tsparse ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! ximagesink sync=false
```

My question is: can I push a processed OpenCV Mat into a GStreamer pipeline, have it encoded there, and then sent over the network via udpsink? If so, how do I achieve this?

Side question: is there a way to debug the VideoWriter, for example to check whether frames are actually being written to it?

Please note that I am using OpenCV 2.4.12 and GStreamer 1.2 on Ubuntu 14.04.

Any help is great!

EDIT: To provide more information, I tried the following code, and it gave: GStreamer Plugin: Embedded video playback halted; module appsrc0 reported: Internal data flow error.

```cpp
#include <stdio.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv.hpp>

int main(int argc, char *argv[])
{
    cv::VideoCapture cap("v4l2src ! video/x-raw, framerate=30/1, width=640, height=480, format=RGB ! videoconvert ! appsink");
    if (!cap.isOpened()) {
        printf("=ERR= can't create capture\n");
        return -1;
    }

    cv::VideoWriter writer;
    // problem here
    writer.open("appsrc ! video/x-raw, framerate=30/1, width=640, height=480, format=RGB ! autovideoconvert ! ximagesink sync=false",
                0, (double)30, cv::Size(640, 480), true);
    if (!writer.isOpened()) {
        printf("=ERR= can't create writer\n");
        return -1;
    }

    cv::Mat frame;
    int key;
    while (true) {
        cap >> frame;
        if (frame.empty()) {
            printf("no frame\n");
            break;
        }
        writer << frame;
        key = cv::waitKey(30);
    }
    cv::destroyWindow("video");
}
```

There seems to be something wrong with the appsrc part of the pipeline, but I have no idea what went wrong, because this pipeline works fine:

```
gst-launch-1.0 v4l2src ! video/x-raw, framerate=30/1, width=640, height=480, format=RGB ! videoconvert ! ximagesink sync=false
```

2 answers

After hours of searching and testing, I finally got the answer. The key is to use only videoconvert after appsrc; there is no need to set caps. The writer pipeline then looks like:

```
appsrc ! videoconvert ! x264enc ! mpegtsmux ! udpsink host=localhost port=5000
```

Below is an example of code that reads images from the gstreamer pipeline, does some opencv image processing, and writes it back to the pipeline.

With this method, you can easily add any opencv process to the gstreamer pipeline.

```cpp
// Compile with:
// $ g++ opencv_gst.cpp -o opencv_gst `pkg-config --cflags --libs opencv`
#include <stdio.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv.hpp>

int main(int argc, char** argv)
{
    // Original gstreamer pipeline:
    // == Sender ==
    // gst-launch-1.0 v4l2src
    //   ! video/x-raw, framerate=30/1, width=640, height=480, format=RGB
    //   ! videoconvert
    //   ! x264enc noise-reduction=10000 tune=zerolatency byte-stream=true threads=4
    //   ! mpegtsmux
    //   ! udpsink host=localhost port=5000
    //
    // == Receiver ==
    // gst-launch-1.0 -ve udpsrc port=5000
    //   ! tsparse ! tsdemux
    //   ! h264parse ! avdec_h264
    //   ! videoconvert
    //   ! ximagesink sync=false

    // first part of sender pipeline
    cv::VideoCapture cap("v4l2src ! video/x-raw, framerate=30/1, width=640, height=480, format=RGB ! videoconvert ! appsink");
    if (!cap.isOpened()) {
        printf("=ERR= can't create video capture\n");
        return -1;
    }

    // second part of sender pipeline (port must match the receiver, here 5000)
    cv::VideoWriter writer;
    writer.open("appsrc ! videoconvert ! x264enc noise-reduction=10000 tune=zerolatency byte-stream=true threads=4 ! mpegtsmux ! udpsink host=localhost port=5000",
                0, (double)30, cv::Size(640, 480), true);
    if (!writer.isOpened()) {
        printf("=ERR= can't create video writer\n");
        return -1;
    }

    cv::Mat frame;
    int key;
    while (true) {
        cap >> frame;
        if (frame.empty())
            break;

        /* Process the frame here */

        writer << frame;
        key = cv::waitKey(30);
    }
}
```

Hope this helps;)


OK, this is too long for a comment. It's not an answer, but a few tips:

1a. Use netcat to check what is received on the receiver side. It's simple:

```shell
nc -l 5000 -u
```

Then check what is printed when you send something to the receiver. nc will simply dump everything it receives to the screen.

1b. You can try VLC as the receiver and check its debug messages (Tools > Messages, or press Ctrl+M). Set the log level to 2 (debug). Then open a network stream and use udp://@:5000 as the URL.

Btw, you can also test it over RTP with this pipeline:

```
appsrc ! x264enc ! mpegtsmux ! rtpmp2tpay ! udpsink host=localhost port=5000
```

Then in VLC, use rtp://@:5000 as the URL.
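For completeness, a GStreamer-only receiver for this RTP variant might look like the pipeline below; the udpsrc caps string is my assumption (udpsrc needs caps for RTP, since the raw UDP stream carries no type information):

```
gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp, media=video, encoding-name=MP2T" ! rtpmp2tdepay ! tsparse ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! ximagesink sync=false
```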

2. Check what happens after appsrc using the identity element (very useful; I often use it to debug pipeline problems):

Change your writer pipeline (note the identity element):

```cpp
cv::VideoWriter writer = cv::VideoWriter("appsrc ! identity silent=false ! x264enc ! mpegtsmux ! udpsink host=localhost port=5000", 0, (double)30, cv::Size(640, 480), true);
```

Then run your code and check the output: it should list the buffers flowing in from appsrc.

3. For the code you posted as an update: I would try passing the caps via the caps= attribute, although maybe it makes no difference:

```cpp
writer.open("appsrc caps=\"video/x-raw, framerate=30/1, width=640, height=480, format=RGB\" ! autovideoconvert ! ximagesink sync=false", 0, (double)30, cv::Size(640, 480), true);
```
