How to change the orientation of captured byte[] frames via the onPreviewFrame callback?

I have searched a lot for this question but never found a satisfactory answer, so this is my last hope.

I have an onPreviewFrame callback that delivers raw byte[] frames in a supported preview format (NV21, with H.264 encoding type).

The problem is that the callback always delivers the byte[] frames in one fixed orientation; when the device rotates, the rotation is not reflected in the captured byte[] frames. I tried setDisplayOrientation and setRotation, but these APIs only affect the displayed preview and have no effect at all on the captured byte[] frames.
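
For illustration, this is roughly the kind of code I tried, adapted from the sample in the Camera docs (cameraId and activity stand in for my actual values); it rotates the on-screen preview only:

    Camera.CameraInfo info = new Camera.CameraInfo();
    Camera.getCameraInfo(cameraId, info);
    int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int degrees = 0;
    switch (rotation) {
        case Surface.ROTATION_0:   degrees = 0;   break;
        case Surface.ROTATION_90:  degrees = 90;  break;
        case Surface.ROTATION_180: degrees = 180; break;
        case Surface.ROTATION_270: degrees = 270; break;
    }
    int result;
    if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
        result = (info.orientation + degrees) % 360;
        result = (360 - result) % 360; // compensate for front-camera mirroring
    } else {
        result = (info.orientation - degrees + 360) % 360;
    }
    // Rotates the displayed preview only; the byte[] data is unchanged.
    // Camera.Parameters.setRotation() likewise only affects takePicture() JPEGs.
    camera.setDisplayOrientation(result);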

The Android docs even say that Camera.setDisplayOrientation only affects the displayed preview, not the frame bytes:

This does not affect the order of byte array passed in onPreviewFrame(byte[], Camera), JPEG pictures, or recorded videos.

Finally, is there a way at any level of the API to change the orientation of the byte[] frames?

+7
2 answers

One possible way, if you do not need to keep the NV21 format, is to use the YuvImage class to get a JPEG buffer, use that buffer to create a Bitmap, and rotate the Bitmap to the appropriate angle. Something like this:

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size previewSize = camera.getParameters().getPreviewSize();
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        byte[] rawImage = null;

        // Compress the NV21 preview buffer to JPEG
        YuvImage yuv = new YuvImage(data, ImageFormat.NV21,
                previewSize.width, previewSize.height, null);
        yuv.compressToJpeg(new Rect(0, 0, previewSize.width, previewSize.height),
                YOUR_JPEG_COMPRESSION, baos);
        rawImage = baos.toByteArray();

        // This is the same image as the preview, but in JPEG and not rotated
        Bitmap bitmap = BitmapFactory.decodeByteArray(rawImage, 0, rawImage.length);
        ByteArrayOutputStream rotatedStream = new ByteArrayOutputStream();

        // Rotate the Bitmap
        Matrix matrix = new Matrix();
        matrix.postRotate(YOUR_DEFAULT_ROTATION);
        bitmap = Bitmap.createBitmap(bitmap, 0, 0,
                previewSize.width, previewSize.height, matrix, false);

        // Dump the rotated Bitmap to the stream
        bitmap.compress(Bitmap.CompressFormat.JPEG, YOUR_JPEG_COMPRESSION, rotatedStream);
        rawImage = rotatedStream.toByteArray();

        // Do something with this byte array
    }
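
If you need a value for YOUR_DEFAULT_ROTATION, one option (my assumption, not part of the snippet above) is the sensor orientation reported by Camera.CameraInfo:

    // cameraId is whichever camera you opened; for a back camera,
    // info.orientation is usually the angle needed to make frames upright.
    Camera.CameraInfo info = new Camera.CameraInfo();
    Camera.getCameraInfo(cameraId, info);
    int YOUR_DEFAULT_ROTATION = info.orientation; // 0, 90, 180 or 270 degrees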
+3

I modified the onPreviewFrame method of this open-source Android Touch-To-Record library to transpose and resize the captured frames.

I defined yuvIplImage as follows in my setCameraParams() method:

    // Width and height are swapped: the recorded (transposed) frames are portrait
    IplImage yuvIplImage = IplImage.create(mPreviewSize.height, mPreviewSize.width,
            opencv_core.IPL_DEPTH_8U, 2);

This is my onPreviewFrame() method:

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        long frameTimeStamp = 0L;

        if (FragmentCamera.mAudioTimestamp == 0L && FragmentCamera.firstTime > 0L) {
            frameTimeStamp = 1000L * (System.currentTimeMillis() - FragmentCamera.firstTime);
        } else if (FragmentCamera.mLastAudioTimestamp == FragmentCamera.mAudioTimestamp) {
            frameTimeStamp = FragmentCamera.mAudioTimestamp + FragmentCamera.frameTime;
        } else {
            long l2 = (System.nanoTime() - FragmentCamera.mAudioTimeRecorded) / 1000L;
            frameTimeStamp = l2 + FragmentCamera.mAudioTimestamp;
            FragmentCamera.mLastAudioTimestamp = FragmentCamera.mAudioTimestamp;
        }

        synchronized (FragmentCamera.mVideoRecordLock) {
            if (FragmentCamera.recording && FragmentCamera.rec && lastSavedframe != null
                    && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null) {
                FragmentCamera.mVideoTimestamp += FragmentCamera.frameTime;

                if (lastSavedframe.getTimeStamp() > FragmentCamera.mVideoTimestamp) {
                    FragmentCamera.mVideoTimestamp = lastSavedframe.getTimeStamp();
                }

                try {
                    yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());

                    IplImage bgrImage = IplImage.create(mPreviewSize.width, mPreviewSize.height,
                            opencv_core.IPL_DEPTH_8U, 4); // In my case, mPreviewSize.width = 1280 and mPreviewSize.height = 720
                    IplImage transposed = IplImage.create(mPreviewSize.height, mPreviewSize.width,
                            yuvIplImage.depth(), 4);
                    IplImage squared = IplImage.create(mPreviewSize.height, mPreviewSize.height,
                            yuvIplImage.depth(), 4);

                    int[] _temp = new int[mPreviewSize.width * mPreviewSize.height];
                    Util.YUV_NV21_TO_BGR(_temp, data, mPreviewSize.width, mPreviewSize.height);
                    bgrImage.getIntBuffer().put(_temp);

                    // Transpose + horizontal flip = 90-degree clockwise rotation
                    opencv_core.cvTranspose(bgrImage, transposed);
                    opencv_core.cvFlip(transposed, transposed, 1);

                    // Crop the rotated frame to a square
                    opencv_core.cvSetImageROI(transposed,
                            opencv_core.cvRect(0, 0, mPreviewSize.height, mPreviewSize.height));
                    opencv_core.cvCopy(transposed, squared, null);
                    opencv_core.cvResetImageROI(transposed);

                    videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
                    videoRecorder.record(squared);
                } catch (com.googlecode.javacv.FrameRecorder.Exception e) {
                    e.printStackTrace();
                }
            }

            lastSavedframe = new SavedFrames(data, frameTimeStamp);
        }
    }

This code uses the YUV_NV21_TO_BGR method, which I found at this link.

This method basically solves what I call the "Green Devil problem of Android". You can see other Android developers facing the same problem in other SO threads. Before adding the YUV_NV21_TO_BGR method, when I just took the transpose of the YuvIplImage, and more importantly used any combination of transpose and flip (with or without resizing), the resulting video had a greenish tint. The YUV_NV21_TO_BGR method saved the day. Thanks to @David Han from that Google Groups thread.
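
I am not reproducing the linked method here, but a rough sketch of this kind of NV21-to-packed-RGB conversion (standard coefficients; the 0xAARRGGBB packing is my assumption and may need to match your IplImage's channel order) looks like this:

    // A minimal sketch of an NV21 -> packed-int color conversion, NOT the
    // linked implementation. Uses common full-range BT.601 coefficients.
    public static void yuvNv21ToArgb(int[] out, byte[] nv21, int width, int height) {
        final int frameSize = width * height;
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int y = nv21[row * width + col] & 0xFF;
                // One interleaved VU pair is shared by each 2x2 block of pixels
                int uvIndex = frameSize + (row >> 1) * width + (col & ~1);
                int v = (nv21[uvIndex] & 0xFF) - 128;     // NV21 stores V first
                int u = (nv21[uvIndex + 1] & 0xFF) - 128;

                int r = (int) (y + 1.402f * v);
                int g = (int) (y - 0.344f * u - 0.714f * v);
                int b = (int) (y + 1.772f * u);

                // Clamp to [0, 255]
                r = Math.max(0, Math.min(255, r));
                g = Math.max(0, Math.min(255, g));
                b = Math.max(0, Math.min(255, b));

                out[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
    }

Skipping a proper conversion like this (i.e., transposing the raw NV21 buffer as if it were a packed pixel format) is what misaligns the chroma plane and produces the green output.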

Also be aware that all this processing (transpose, flip, and resize) inside onPreviewFrame takes a lot of time, which causes a very serious hit to your frames per second (FPS). When I used this code inside onPreviewFrame, the FPS of the recorded video dropped from 30 fps to around 3 fps.

I would advise against this approach. Instead, post-process (transpose, flip, and resize) your video after recording, using JavaCV in an AsyncTask. Hope this helps.
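
A minimal skeleton of that idea, where rotateRecordedVideo() is a hypothetical helper you would implement with JavaCV (not something from the library):

    // Sketch only: runs the heavy JavaCV work off the UI thread after recording.
    private class PostProcessTask extends AsyncTask<String, Void, Boolean> {
        @Override
        protected Boolean doInBackground(String... paths) {
            try {
                rotateRecordedVideo(paths[0]); // hypothetical helper, see below
                return Boolean.TRUE;
            } catch (Exception e) {
                e.printStackTrace();
                return Boolean.FALSE;
            }
        }

        @Override
        protected void onPostExecute(Boolean success) {
            // Back on the UI thread: show the processed video or an error
        }
    }

    // Hypothetical helper: grab frames with a JavaCV frame grabber,
    // transpose/flip/resize each one, and write them out with a recorder.
    private void rotateRecordedVideo(String path) throws Exception {
        // ... grab -> cvTranspose / cvFlip / cvResize -> record ...
    }

    // Usage: new PostProcessTask().execute(recordedVideoPath);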

+1
