FFmpeg: separating the RGB and alpha channels using a filter

I am trying to use ffmpeg to split the input file into two separate files: one containing the RGB channels and one containing the alpha channel.

I managed to do both, but now I want to combine them into one command. Here is what I'm doing:

ffmpeg -r $FPS -y -i input.flv -vcodec libx264 -vpre ipod640 -acodec libfaac -s 256x256 -r $FPS -filter_complex INSERT_FILTER_HERE rgb.mp4 alpha.mp4 

where INSERT_FILTER_HERE:

 format=rgba, split [rgb_in][alpha_in];
 [rgb_in] fifo, lutrgb=a=minval [rgb_out];
 [alpha_in] format=rgba, split [T1], fifo, lutrgb=r=maxval:g=maxval:b=maxval, [T2] overlay [out];
 [T1] fifo, lutrgb=r=minval:g=minval:b=minval [T2]

In short, I split the input into two streams: for the first stream I "delete" the alpha channel, and for the second stream I extract a grayscale view of the alpha channel. When I run this through graph2dot it parses fine, with a nullsink as the output.

However, when I run it in ffmpeg with -filter_complex, I get:

 ffmpeg version N-41994-g782763e Copyright (c) 2000-2012 the FFmpeg developers
   built on Jun 28 2012 17:45:15 with gcc 4.6.3
   configuration: --enable-gpl --enable-nonfree --enable-pthreads --enable-filters --enable-libfaac --enable-libmp3lame --enable-libx264 --enable-libtheora --enable-libvpx --enable-postproc --enable-avfilter
   libavutil      51. 63.100 / 51. 63.100
   libavcodec     54. 29.101 / 54. 29.101
   libavformat    54. 11.100 / 54. 11.100
   libavdevice    54.  0.100 / 54.  0.100
   libavfilter     3.  0.100 /  3.  0.100
   libswscale      2.  1.100 /  2.  1.100
   libswresample   0. 15.100 /  0. 15.100
   libpostproc    52.  0.100 / 52.  0.100
 Input #0, flv, from 'input.flv':
   Metadata:
     audiodelay      : 0
     canSeekToEnd    : true
   Duration: 00:01:10.56, start: 0.000000, bitrate: 1964 kb/s
     Stream #0:0: Video: vp6a, yuva420p, 800x950, 1536 kb/s, 25 tbr, 1k tbn, 1k tbc
     Stream #0:1: Audio: mp3, 44100 Hz, stereo, s16, 128 kb/s
 [graph 0 input from stream 0:0 @ 0x2e4c6e0] w:800 h:950 pixfmt:yuva420p tb:1/30 fr:30/1 sar:0/1 sws_param:flags=2
 Output pad "default" for the filter "Parsed_lutrgb_3" of type "lutrgb" not connected to any destination

Any ideas on how I can make ffmpeg recognize that it should write [rgb_out] to rgb.mp4 and [out] to alpha.mp4?

Thanks in advance!

1 answer

You need to explicitly map the outputs of the filter graph to the output files using -map. From the documentation for -filter_complex:

Output link labels are referred to with -map. Unlabeled outputs are added to the first output file.

For example, to overlay an image on top of a video:

 ffmpeg -i video.mkv -i image.png -filter_complex '[0:v][1:v]overlay[out]' \
   -map '[out]' out.mkv

So in your case you need something like:

 ffmpeg ... -i input ... -filter_complex 'split [rgb_in][alpha_in]; ... [rgb_out]; ... [alpha_out]' \
   -map '[rgb_out]' rgb.mp4 -map '[alpha_out]' alpha.mp4
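Applied to the filter graph from the question, a complete invocation might look like the sketch below. The encoder and scaling flags are copied from the original command and may need adjusting for your build; note that with -map, per-output options like -s and -vcodec are placed before the output file they apply to.

```shell
# Sketch only: the question's filter graph plus explicit -map options.
# rgb.mp4 gets the RGB stream and the source audio; alpha.mp4 gets the
# grayscale alpha stream with no audio (-an).
ffmpeg -y -i input.flv \
  -filter_complex "format=rgba, split [rgb_in][alpha_in]; \
    [rgb_in] fifo, lutrgb=a=minval [rgb_out]; \
    [alpha_in] format=rgba, split [T1], fifo, lutrgb=r=maxval:g=maxval:b=maxval, [T2] overlay [out]; \
    [T1] fifo, lutrgb=r=minval:g=minval:b=minval [T2]" \
  -map '[rgb_out]' -map 0:a -vcodec libx264 -acodec libfaac -s 256x256 rgb.mp4 \
  -map '[out]' -an -vcodec libx264 -s 256x256 alpha.mp4
```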
