I am trying to understand whether a direct streaming solution is feasible here. I want to capture WebRTC streams (audio and video), send them to a server, and transform them into segments that can be served to an HTML5 `<video>` tag or a DASH player, using the WebM container (VP8 and Opus codecs).
I have also looked into ffmpeg, ffserver, and GStreamer, but ...
My question is: how can I ingest WebRTC streams in real time and convert them into HTTP segments compatible with live DASH?
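One possible route, sketched below under assumptions: if the server terminates the WebRTC connection and exposes the incoming RTP streams via an SDP description (the file name `stream.sdp` and the output path `out/` are placeholders, not anything from a real setup), ffmpeg's `dash` muxer can repackage VP8/Opus into live WebM DASH segments without re-encoding. This is a minimal sketch, not a tested end-to-end solution; exact flags depend on your ffmpeg version.

```shell
# Sketch only: assumes stream.sdp describes the RTP audio/video
# (VP8 + Opus) your server receives from the WebRTC peer, and that
# your ffmpeg build includes the dash muxer.
ffmpeg -protocol_whitelist file,udp,rtp -i stream.sdp \
  -c:v copy -c:a copy \          # no transcoding: keep VP8/Opus as-is
  -f dash \
  -dash_segment_type webm \      # WebM segments instead of fMP4
  -seg_duration 2 \              # ~2 s segments for low startup latency
  -streaming 1 \                 # write segments incrementally (live)
  -window_size 5 \               # keep only the last 5 segments in the MPD
  -remove_at_exit 1 \
  out/manifest.mpd
```

The resulting `manifest.mpd` and segments can be served by any HTTP server to a DASH player such as dash.js. Note that a plain HTML5 `<video>` tag cannot consume an MPD directly; it needs a Media Source Extensions-based player.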
Has anyone achieved something similar?