Problem recording video

I am trying to record a video at a resolution of 480x480, as in Vine, using javacv. As a starting point, I used the sample provided at https://github.com/bytedeco/javacv/blob/master/samples/RecordActivity.java The video is recorded and saved, but not at the desired resolution.

The problem is that a preview resolution of 480x480 is not natively supported on Android, so the captured frames have to be pre-processed to obtain video at the desired resolution.
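Since the camera will not report 480x480 among its supported preview sizes, a common first step is to pick the smallest supported size that can contain a 480x480 crop. Below is a minimal sketch of that selection logic; it is an illustration, not part of the original code (plain int arrays stand in for Camera.Size, and the class and method names are mine):

```java
import java.util.Arrays;

public class PreviewSizePicker {
    // Given the supported preview sizes as {width, height} pairs, pick the
    // smallest one (by area) that can contain a targetEdge x targetEdge crop.
    // Returns null if no supported size is large enough.
    static int[] pickSize(int[][] supported, int targetEdge) {
        int[] best = null;
        for (int[] s : supported) {
            if (s[0] >= targetEdge && s[1] >= targetEdge) {
                if (best == null || s[0] * s[1] < best[0] * best[1]) {
                    best = s;
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Sizes like those reported by Camera.Parameters.getSupportedPreviewSizes()
        int[][] supported = { {176, 144}, {320, 240}, {640, 480}, {1280, 720} };
        System.out.println(Arrays.toString(pickSize(supported, 480))); // [640, 480]
    }
}
```

With a 640x480 preview selected, producing a 480x480 video is then purely a cropping problem.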

So, once I was able to record video using the sample code provided by javacv, the next task was pre-processing the frames. Research showed that efficient cropping is possible when the desired final image width is the same as the width of the recorded image. Such a solution was provided in the SO question Recording video on Android using JavaCV (updated 2014-02-17). I changed the onPreviewFrame method as suggested in that answer.

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
            startTime = System.currentTimeMillis();
            return;
        }
        if (RECORD_LENGTH > 0) {
            int i = imagesIndex++ % images.length;
            yuvImage = images[i];
            timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
        }
        /* get video data */
        imageWidth = 640;
        imageHeight = 480;
        int finalImageHeight = 360;
        if (yuvImage != null && recording) {
            ByteBuffer bb = (ByteBuffer) yuvImage.image[0].position(0); // resets the buffer
            final int startY = imageWidth * (imageHeight - finalImageHeight) / 2;
            final int lenY = imageWidth * finalImageHeight;
            bb.put(data, startY, lenY);
            final int startVU = imageWidth * imageHeight + imageWidth * (imageHeight - finalImageHeight) / 4;
            final int lenVU = imageWidth * finalImageHeight / 2;
            bb.put(data, startVU, lenVU);
            try {
                long t = 1000 * (System.currentTimeMillis() - startTime);
                if (t > recorder.getTimestamp()) {
                    recorder.setTimestamp(t);
                }
                recorder.record(yuvImage);
            } catch (FFmpegFrameRecorder.Exception e) {
                Log.e(LOG_TAG, "problem with recorder():", e);
            }
        }
    }
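The offset arithmetic in this snippet (startY, startVU) can be checked off-device. Below is the same NV21 vertical center-crop extracted into plain Java with toy dimensions; the class and method names are mine:

```java
public class Nv21VerticalCrop {
    // Center-crop an NV21 frame vertically from srcH to dstH rows, keeping
    // the full width. NV21 stores the full-size Y plane first, followed by
    // the interleaved VU plane at half vertical resolution, so the whole
    // frame occupies w * h * 3/2 bytes.
    static byte[] crop(byte[] nv21, int w, int srcH, int dstH) {
        byte[] out = new byte[w * dstH * 3 / 2];
        int startY = w * (srcH - dstH) / 2;  // skip (srcH - dstH)/2 top rows of Y
        int lenY = w * dstH;
        System.arraycopy(nv21, startY, out, 0, lenY);
        int startVU = w * srcH + w * (srcH - dstH) / 4; // offset into the VU plane
        int lenVU = w * dstH / 2;
        System.arraycopy(nv21, startVU, out, lenY, lenVU);
        return out;
    }

    public static void main(String[] args) {
        int w = 8, srcH = 8, dstH = 6;
        byte[] src = new byte[w * srcH * 3 / 2];
        for (int i = 0; i < src.length; i++) src[i] = (byte) i;
        byte[] out = crop(src, w, srcH, dstH);
        System.out.println(out.length); // 8*6*3/2 = 72
        System.out.println(out[0]);     // first kept Y byte: index 8*(8-6)/2 = 8
    }
}
```

Because only whole rows are dropped, the copy is two contiguous arraycopy calls, which is why this vertical crop is cheap compared to changing the width.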

Note also that this solution was written for an older version of javacv. The resulting video had a yellowish overlay covering two-thirds of the frame, and there was an empty section on the left side because the frame was not cropped correctly.

So my question is: what is the most suitable way to crop video frames using the latest javacv?

Code after the changes proposed by Alex Cohn

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
            startTime = System.currentTimeMillis();
            return;
        }
        if (RECORD_LENGTH > 0) {
            int i = imagesIndex++ % images.length;
            yuvImage = images[i];
            timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
        }
        /* get video data */
        imageWidth = 640;
        imageHeight = 480;
        destWidth = 480;
        if (yuvImage != null && recording) {
            ByteBuffer bb = (ByteBuffer) yuvImage.image[0].position(0); // resets the buffer
            int start = 2 * ((imageWidth - destWidth) / 4); // this must be even
            for (int row = 0; row < imageHeight * 3 / 2; row++) {
                bb.put(data, start, destWidth);
                start += imageWidth;
            }
            try {
                long t = 1000 * (System.currentTimeMillis() - startTime);
                if (t > recorder.getTimestamp()) {
                    recorder.setTimestamp(t);
                }
                recorder.record(yuvImage);
            } catch (FFmpegFrameRecorder.Exception e) {
                Log.e(LOG_TAG, "problem with recorder():", e);
            }
        }
    }

Screenshot of a video generated with this code (destWidth = 480):

[Screenshot: recorded video at 480x480]

Next, I tried to capture a video with destWidth set to 639. The result:

[Screenshot: recorded video at 639x480]

With destWidth = 639, the content is repeated twice. With destWidth = 480, the content is repeated five times, and the green overlay and distortion are worse.

Also, when destWidth = imageWidth, the video is recorded correctly; i.e., at 640x480 there is no repetition of the video content and no green overlay.
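One plausible explanation for the repetition (my assumption, not something confirmed in the thread): the Frame and the recorder are still sized for imageWidth, so rows written at destWidth are packed tighter than the stride the consumer expects, and the picture drifts sideways and wraps. A toy sketch of such a stride mismatch:

```java
public class StrideMismatchDemo {
    // Pack cropW bytes per row contiguously into a buffer that a consumer
    // will still read assuming a stride of frameW bytes per row.
    static byte[] packRows(int frameW, int cropW, int h) {
        byte[] packed = new byte[frameW * h]; // buffer sized for the full stride
        int p = 0;
        for (int r = 0; r < h; r++)
            for (int c = 0; c < cropW; c++)
                packed[p++] = (byte) (r * 10 + c); // row r holds r0, r1, r2...
        return packed;
    }

    public static void main(String[] args) {
        byte[] packed = packRows(4, 3, 3);
        // A consumer assuming stride 4 expects row 1 to begin at index 4
        // with value 10, but finds 11: each row drifts by (frameW - cropW),
        // which matches the "repeated, shifted content" artifact described.
        System.out.println(packed[4]); // 11
    }
}
```

If this is the cause, the fix is to allocate the Frame and construct the FFmpegFrameRecorder with destWidth instead of imageWidth.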

Convert Frame to IplImage

When this question was first asked, I had missed the fact that the record method of FFmpegFrameRecorder now accepts a Frame object, whereas previously it took an IplImage. So I tried applying Alex Cohn's solution by converting the Frame to an IplImage.

    //---------------------------------------
    // initialize ffmpeg_recorder
    //---------------------------------------
    private void initRecorder() {
        Log.w(LOG_TAG, "init recorder");
        imageWidth = 640;
        imageHeight = 480;
        if (RECORD_LENGTH > 0) {
            imagesIndex = 0;
            images = new Frame[RECORD_LENGTH * frameRate];
            timestamps = new long[images.length];
            for (int i = 0; i < images.length; i++) {
                images[i] = new Frame(imageWidth, imageHeight, Frame.DEPTH_UBYTE, 2);
                timestamps[i] = -1;
            }
        } else if (yuvImage == null) {
            yuvImage = new Frame(imageWidth, imageHeight, Frame.DEPTH_UBYTE, 2);
            Log.i(LOG_TAG, "create yuvImage");
            OpenCVFrameConverter.ToIplImage converter = new OpenCVFrameConverter.ToIplImage();
            yuvIplimage = converter.convert(yuvImage);
        }
        Log.i(LOG_TAG, "ffmpeg_url: " + ffmpeg_link);
        recorder = new FFmpegFrameRecorder(ffmpeg_link, imageWidth, imageHeight, 1);
        recorder.setFormat("flv");
        recorder.setSampleRate(sampleAudioRateInHz);
        // Set in the surface changed method
        recorder.setFrameRate(frameRate);
        Log.i(LOG_TAG, "recorder initialize success");
        audioRecordRunnable = new AudioRecordRunnable();
        audioThread = new Thread(audioRecordRunnable);
        runAudioThread = true;
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
            startTime = System.currentTimeMillis();
            return;
        }
        if (RECORD_LENGTH > 0) {
            int i = imagesIndex++ % images.length;
            yuvImage = images[i];
            timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
        }
        /* get video data */
        int destWidth = 640;
        if (yuvIplimage != null && recording) {
            ByteBuffer bb = yuvIplimage.getByteBuffer(); // resets the buffer
            int start = 2 * ((imageWidth - destWidth) / 4); // this must be even
            for (int row = 0; row < imageHeight * 3 / 2; row++) {
                bb.put(data, start, destWidth);
                start += imageWidth;
            }
            try {
                long t = 1000 * (System.currentTimeMillis() - startTime);
                if (t > recorder.getTimestamp()) {
                    recorder.setTimestamp(t);
                }
                recorder.record(yuvImage);
            } catch (FFmpegFrameRecorder.Exception e) {
                Log.e(LOG_TAG, "problem with recorder():", e);
            }
        }
    }

But videos created using this method contain only green frames.

1 answer

First of all, this is pre-processing, not post-processing, of the video.

I do not know what changes are needed to adapt this solution to the new version of javacv; I hope they have kept the library backward compatible.

Your buffer is 640 pixels wide and 480 pixels high. You want to crop a 480x480 region out of it.


This means that you need a loop that copies each row into the IplImage, something like this:

    private int imageWidth = 640;
    private int imageHeight = 480;
    private int destWidth = 480;

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (data.length != imageWidth * imageHeight * 3 / 2) { // NV21 uses 3/2 bytes per pixel
            Camera.Size sz = camera.getPreviewSize();
            imageWidth = sz.width;
            imageHeight = sz.height;
            destWidth = imageHeight;
        }
        ByteBuffer bb = (ByteBuffer) yuvImage.image[0].position(0); // resets the buffer
        int start = 2 * ((imageWidth - destWidth) / 4); // this must be even
        for (int row = 0; row < imageHeight * 3 / 2; row++) {
            bb.put(data, start, destWidth);
            start += imageWidth;
        }
        recorder.record(yuvImage);
    }
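The row-copy loop can also be exercised without a device or JavaCV. Here is a standalone sketch of the same horizontal crop on plain byte arrays (the class and method names are mine); note that the Frame and recorder consuming the result must be configured for destWidth x imageHeight:

```java
public class Nv21HorizontalCrop {
    // Crop an NV21 frame horizontally to destW columns, keeping all rows.
    // The same loop handles both the Y rows and the interleaved VU rows,
    // because every row has a stride of srcW bytes; the left offset is kept
    // even so that V and U bytes are not swapped in the chroma rows.
    static byte[] crop(byte[] nv21, int srcW, int h, int destW) {
        byte[] out = new byte[destW * h * 3 / 2];
        int start = 2 * ((srcW - destW) / 4); // even left offset, as in the answer
        int o = 0;
        for (int row = 0; row < h * 3 / 2; row++) {
            System.arraycopy(nv21, start, out, o, destW);
            start += srcW;
            o += destW;
        }
        return out;
    }

    public static void main(String[] args) {
        int srcW = 8, h = 4, destW = 4;
        byte[] src = new byte[srcW * h * 3 / 2];
        for (int i = 0; i < src.length; i++) src[i] = (byte) i;
        byte[] out = crop(src, srcW, h, destW);
        System.out.println(out.length); // 4*4*3/2 = 24
        System.out.println(out[0]);     // left offset: 2*((8-4)/4) = 2
    }
}
```

Because the output rows are written back to back at the new width, the destination buffer really is destW pixels wide, which is exactly the condition the question's green-overlay recordings were violating.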
