I am trying to record video at a resolution of 480 * 480, as in Vine, using JavaCV. As a starting point I used the sample provided at https://github.com/bytedeco/javacv/blob/master/samples/RecordActivity.java. The video is recorded and saved, but not at the desired resolution.
The problem is that a 480 * 480 preview resolution is not natively supported on Android, so the frames must be pre-processed (cropped) to obtain video at the desired resolution.
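To confirm which resolutions are actually available, the supported preview sizes can be listed before choosing one. A minimal sketch, assuming the deprecated android.hardware.Camera API that the sample uses (the helper name is mine):

    // Hypothetical helper: logs every preview size the camera offers, to
    // verify that a square 480x480 mode is not among them.
    private void logSupportedPreviewSizes(Camera camera) {
        Camera.Parameters params = camera.getParameters();
        for (Camera.Size size : params.getSupportedPreviewSizes()) {
            Log.i(LOG_TAG, "supported preview size: " + size.width + "x" + size.height);
        }
    }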
Once I was able to record video using the JavaCV sample code, the next task was to pre-process the frames. What I found is that efficient cropping is possible when the desired final image width is the same as the width of the recorded image, because the bytes to copy then remain contiguous in the preview buffer. Such a solution was provided in the SO question Recording video on Android using JavaCV (Updated 2014 02 17). I changed the onPreviewFrame method as suggested in that answer.
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
            startTime = System.currentTimeMillis();
            return;
        }
        if (RECORD_LENGTH > 0) {
            int i = imagesIndex++ % images.length;
            yuvImage = images[i];
            timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
        }
        imageWidth = 640;
        imageHeight = 480;
        int finalImageHeight = 360;
        if (yuvImage != null && recording) {
            ByteBuffer bb = (ByteBuffer) yuvImage.image[0].position(0);
            // ... (snippet truncated here; the copy into bb follows)
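For context, this is how I understand the height-only crop would be completed after the line above; a sketch assuming NV21 preview data and that yuvImage (and the recorder) are sized imageWidth x finalImageHeight rather than imageWidth x imageHeight. Because the full width is kept, both copies are contiguous:

    // Sketch: width-preserving height crop of an NV21 preview frame.
    // The Y plane is one byte per pixel, row by row, so the first
    // imageWidth * finalImageHeight bytes are exactly the cropped rows.
    bb.put(data, 0, imageWidth * finalImageHeight);
    // The interleaved VU plane starts right after the full-size Y plane
    // and has half as many rows, so its cropped part is contiguous too.
    bb.put(data, imageWidth * imageHeight, imageWidth * finalImageHeight / 2);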
Note that this solution was written for an older version of JavaCV. The resulting video had a yellowish overlay covering two thirds of the frame, and an empty section on the left side because the frame was not cropped correctly.
So my question is: what is the most suitable way to crop video frames using the latest JavaCV?
Code after the changes proposed by Alex Cohn:
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
            startTime = System.currentTimeMillis();
            return;
        }
        if (RECORD_LENGTH > 0) {
            int i = imagesIndex++ % images.length;
            yuvImage = images[i];
            timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
        }
        imageWidth = 640;
        imageHeight = 480;
        destWidth = 480;
        if (yuvImage != null && recording) {
            ByteBuffer bb = (ByteBuffer) yuvImage.image[0].position(0);
            // ... (snippet truncated here; the copy into bb follows)
Screenshot of a video generated with this code (destWidth = 480):

Next, I tried to capture a video with destWidth set to 639. The result:

When destWidth is 639, the video content is repeated twice; when it is 480, the content is repeated five times, and the green overlay and distortion are stronger.
Also, when destWidth = imageWidth, the video is recorded correctly, i.e. at 640 * 480 there is no repetition of the content and no green overlay. So the problem appears only when the copied row length differs from the width the recorder expects.
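My guess (an assumption on my part, not a verified fix) is a stride mismatch: the Frame buffers and the recorder are still created with imageWidth = 640, so rows written destWidth bytes at a time are read back by the encoder with a 640-byte stride and wrap around. That would explain both the repetition and why destWidth = imageWidth records correctly. If that is right, the cropped width would have to be used consistently, for example:

    // Sketch (assumption): allocate the Frame and the recorder with the
    // cropped width so the row stride matches what onPreviewFrame writes.
    int destWidth = 480;
    yuvImage = new Frame(destWidth, imageHeight, Frame.DEPTH_UBYTE, 2);
    recorder = new FFmpegFrameRecorder(ffmpeg_link, destWidth, imageHeight, 1);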
Convert Frame to IplImage
When I first asked this question, I had missed the fact that the record method of FFmpegFrameRecorder now accepts an object of type Frame, whereas it previously took an IplImage. So I tried applying Alex Cohn's solution by converting the Frame to an IplImage.
    //---------------------------------------
    // initialize ffmpeg_recorder
    //---------------------------------------
    private void initRecorder() {
        Log.w(LOG_TAG, "init recorder");

        imageWidth = 640;
        imageHeight = 480;

        if (RECORD_LENGTH > 0) {
            imagesIndex = 0;
            images = new Frame[RECORD_LENGTH * frameRate];
            timestamps = new long[images.length];
            for (int i = 0; i < images.length; i++) {
                images[i] = new Frame(imageWidth, imageHeight, Frame.DEPTH_UBYTE, 2);
                timestamps[i] = -1;
            }
        } else if (yuvImage == null) {
            yuvImage = new Frame(imageWidth, imageHeight, Frame.DEPTH_UBYTE, 2);
            Log.i(LOG_TAG, "create yuvImage");
            OpenCVFrameConverter.ToIplImage converter = new OpenCVFrameConverter.ToIplImage();
            yuvIplimage = converter.convert(yuvImage);
        }

        Log.i(LOG_TAG, "ffmpeg_url: " + ffmpeg_link);
        recorder = new FFmpegFrameRecorder(ffmpeg_link, imageWidth, imageHeight, 1);
        recorder.setFormat("flv");
        recorder.setSampleRate(sampleAudioRateInHz);
        // Set in the surface changed method
        recorder.setFrameRate(frameRate);

        Log.i(LOG_TAG, "recorder initialize success");

        audioRecordRunnable = new AudioRecordRunnable();
        audioThread = new Thread(audioRecordRunnable);
        runAudioThread = true;
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
            startTime = System.currentTimeMillis();
            return;
        }
        if (RECORD_LENGTH > 0) {
            int i = imagesIndex++ % images.length;
            yuvImage = images[i];
            timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
        }

        /* get video data */
        int destWidth = 640;

        if (yuvIplimage != null && recording) {
            ByteBuffer bb = yuvIplimage.getByteBuffer(); // resets the buffer
            int start = 2 * ((imageWidth - destWidth) / 4); // this must be even
            for (int row = 0; row < imageHeight * 3 / 2; row++) {
                bb.put(data, start, destWidth);
                start += imageWidth;
            }
            try {
                long t = 1000 * (System.currentTimeMillis() - startTime);
                if (t > recorder.getTimestamp()) {
                    recorder.setTimestamp(t);
                }
                recorder.record(yuvImage);
            } catch (FFmpegFrameRecorder.Exception e) {
                Log.e(LOG_TAG, "problem with recorder():", e);
            }
        }
    }
But videos created using this method contain only green frames.
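A uniformly green picture in YUV usually means a zero-filled buffer (Y = 0, U = V = 0 decodes to green), so I suspect the preview bytes never reach the Frames that get recorded: with RECORD_LENGTH > 0, initRecorder above never assigns yuvIplimage (the converter only runs in the else branch), so the yuvIplimage != null guard in onPreviewFrame skips the copy entirely. A sketch of what I would try instead, with the converter kept as a field (an assumption, not a verified fix):

    // Converter created once and reused for every preview callback.
    private final OpenCVFrameConverter.ToIplImage converter = new OpenCVFrameConverter.ToIplImage();

    // In onPreviewFrame: wrap the Frame chosen for this callback, so the
    // copy lands in the buffer that will actually be replayed and recorded.
    if (RECORD_LENGTH > 0) {
        int i = imagesIndex++ % images.length;
        yuvImage = images[i];
        timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
        yuvIplimage = converter.convert(yuvImage);
    }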