Image data from Android camera2 API flipped and squashed on Galaxy S5

I am implementing an application that does real-time image processing on live camera frames. It worked, with limitations, using the now-deprecated android.hardware.Camera; to improve flexibility and performance I would like to use the new android.hardware.camera2 API. However, I am having trouble getting hold of the raw image data. This is on a Samsung Galaxy S5. (Unfortunately, I don't have another Lollipop device available to test on other hardware.)

I got the general structure working (inspired by the "HdrViewFinder" and "Camera2Basic" samples), and a live image is drawn on the screen via SurfaceTexture and GLSurfaceView. However, I also need to access the image data (grayscale is fine, at least for now) for custom image processing. According to the documentation of StreamConfigurationMap.isOutputSupportedFor(class), the recommended surface for receiving image data directly is ImageReader (correct?).

So, I configured my capture requests as:

    mSurfaceTexture.setDefaultBufferSize(640, 480);
    mSurface = new Surface(mSurfaceTexture);
    ...
    mImageReader = ImageReader.newInstance(640, 480, format, 2);
    ...
    List<Surface> surfaces = new ArrayList<Surface>();
    surfaces.add(mSurface);
    surfaces.add(mImageReader.getSurface());
    ...
    mCameraDevice.createCaptureSession(surfaces, mCameraSessionListener, mCameraHandler);
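(For completeness: the repeating request then targets both surfaces. Roughly like this, where mCaptureSession stands for the session delivered to mCameraSessionListener; error handling omitted:)

    CaptureRequest.Builder builder =
            mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    builder.addTarget(mSurface);                  // live preview via the SurfaceTexture
    builder.addTarget(mImageReader.getSurface()); // raw frames for processing
    mCaptureSession.setRepeatingRequest(builder.build(), null, mCameraHandler);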

and in the onImageAvailable callback for ImageReader, I access the data as follows:

    Image img = reader.acquireLatestImage();
    ByteBuffer grayscalePixelsDirectByteBuffer = img.getPlanes()[0].getBuffer();
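In full, the callback looks roughly like this (a minimal sketch; the listener field name and the processing hand-off are placeholders). Note that plane 0 reports a row stride that can be larger than the image width, so a row-by-row copy is needed to get a tightly packed grayscale buffer:

    private final ImageReader.OnImageAvailableListener mImageListener =
            new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader reader) {
            Image img = reader.acquireLatestImage();
            if (img == null) {
                return; // no new frame available
            }
            // Plane 0 of a YUV_420_888 image is the luminance (Y) plane.
            Image.Plane yPlane = img.getPlanes()[0];
            ByteBuffer yBuffer = yPlane.getBuffer();
            int width = img.getWidth();
            int height = img.getHeight();
            int rowStride = yPlane.getRowStride(); // may be larger than width
            byte[] gray = new byte[width * height];
            // Copy row by row so any padding at the end of each row is skipped.
            for (int row = 0; row < height; row++) {
                yBuffer.position(row * rowStride);
                yBuffer.get(gray, row * width, width);
            }
            img.close(); // always close, or the reader runs out of buffers
            // ... hand 'gray' over to the image processing code ...
        }
    };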

... but while (as said) viewing the live image works, something is wrong with the data I get here (or with the way I get it). According to

 mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputFormats(); 

... the following ImageFormats should be supported: NV21, JPEG, YV12, YUV_420_888. I tried all of them (plugged in as the "format" above); all of them support the specified resolution according to getOutputSizes(format), but none of them give the desired result:

  • NV21: ImageReader.newInstance throws java.lang.IllegalArgumentException: NV21 format is not supported
  • JPEG: this does work, but it seems absurd for a real-time application to go through JPEG encoding and decoding for every frame...
  • YV12 and YUV_420_888: this is the strangest result: I do see a grayscale image, but it is flipped vertically (yes, flipped, not rotated!) and heavily squashed (scaled significantly horizontally, but not vertically)

What am I missing here? What causes the image to be flipped and squashed? How can I get a geometrically correct grayscale buffer? Should I be using a different type of surface (instead of ImageReader)?

Any hints appreciated.

android android-5.0-lollipop android-camera galaxy
2 answers

I found an explanation (though not necessarily a satisfactory solution): it turns out the aspect ratio of the sensor array is 16:9 (found via mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE)).

At least when requesting YV12/YUV_420_888, the streaming logic does not crop the image in any way; instead it scales it non-uniformly to reach the requested frame size. The images have correct proportions when a 16:9 format is requested (of which, unfortunately, there are only two options, both at higher resolution). That seems a bit strange to me: it does not appear to happen when requesting JPEG, nor with the equivalent functions of the old camera API, nor for still captures; and I am not sure what non-uniformly scaled frames would be good for anyway.

I feel this is not a very satisfactory outcome, because it means you cannot simply rely on the list of output formats; instead, you first have to query the sensor size, pick the sizes with the same aspect ratio, and then downsample the image yourself (as needed)... A sketch of that filtering step follows below.
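A rough sketch of that filtering step (assuming a CameraManager named manager and a cameraId string; error handling omitted):

    CameraCharacteristics info = manager.getCameraCharacteristics(cameraId);
    Rect activeArray = info.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
    float sensorAspect = (float) activeArray.width() / activeArray.height();

    StreamConfigurationMap map =
            info.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
    List<Size> matching = new ArrayList<Size>();
    for (Size size : map.getOutputSizes(ImageFormat.YUV_420_888)) {
        float aspect = (float) size.getWidth() / size.getHeight();
        // Allow a small tolerance for rounding in the advertised sizes.
        if (Math.abs(aspect - sensorAspect) < 0.01f) {
            matching.add(size); // these sizes should come out undistorted
        }
    }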

I do not know whether this is the expected behavior here or a "feature" of the S5. Comments or suggestions are still welcome.


I had the same problem and found a solution. The first part of the problem is setting the size of the surface buffer:

    // We configure the size of default buffer to be the size of camera preview we want.
    //texture.setDefaultBufferSize(width, height);

This is where the image gets distorted, not in the camera. You should comment this line out, and instead scale the image when displaying it.

    int[] rgba = new int[width * height];
    //getImage(rgba);
    nativeLoader.convertImage(width, height, data, rgba);

    Bitmap bmp = mBitmap;
    bmp.setPixels(rgba, 0, width, 0, 0, width, height);

    Canvas canvas = mTextureView.lockCanvas();
    if (canvas != null) {
        //canvas.drawBitmap(bmp, 0, 0, null); //configureTransform(width, height), null);
        //canvas.drawBitmap(bmp, configureTransform(width, height), null);
        canvas.drawBitmap(bmp, new Rect(0, 0, 320, 240), new Rect(0, 0, 640 * 2, 480 * 2), null);
        //canvas.drawBitmap(bmp, (canvas.getWidth() - 320) / 2, (canvas.getHeight() - 240) / 2, null);
        mTextureView.unlockCanvasAndPost(canvas);
    }
    image.close();

You can play with the values to fine-tune the solution for your problem.

