How to get the current frame (as a bitmap) for an Android face in the Tracker event?

I have the standard com.google.android.gms.vision.Tracker example running successfully on my Android device, and now I need to post-process the image to search for the iris of the person that was reported in the Tracker event methods.

So, how do I get a bitmap that exactly matches the com.google.android.gms.vision.face.Face received in the Tracker events? This also means that the final bitmap must match the resolution of the camera, not the resolution of the screen.

One bad alternative solution is to call takePicture on my CameraSource every few milliseconds and process that image separately with a standalone FaceDetector. Although this works, I ran into the problem that the video stream freezes while takePicture runs, and I get a lot of GC_FOR_ALLOC messages because the single-frame bitmap FaceDetector churns through memory.
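For context, that rejected approach looks roughly like the sketch below. The polling/scheduling code is omitted, and the callback body is my own illustration rather than code from the question; each call stalls the live preview while the JPEG is captured and decoded:

    // Sketch of the rejected polling approach (illustrative only): each call
    // stalls the live preview while the JPEG is captured and decoded.
    mCameraSource.takePicture(null, new CameraSource.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] bytes) {
            Bitmap snapshot = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
            // ... run a standalone FaceDetector over snapshot ...
        }
    });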

1 answer

You need to create your own version of the face detector, one that wraps (delegates to) the google.vision FaceDetector. In your MainActivity or FaceTrackerActivity class (in the Google face tracker sample), create your own detector class as follows:

    import java.io.ByteArrayOutputStream;

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.graphics.ImageFormat;
    import android.graphics.Rect;
    import android.graphics.YuvImage;
    import android.util.SparseArray;

    import com.google.android.gms.vision.Detector;
    import com.google.android.gms.vision.Frame;
    import com.google.android.gms.vision.face.Face;

    class MyFaceDetector extends Detector<Face> {
        private Detector<Face> mDelegate;

        MyFaceDetector(Detector<Face> delegate) {
            mDelegate = delegate;
        }

        @Override
        public SparseArray<Face> detect(Frame frame) {
            // Convert the raw NV21 preview data into a JPEG, then decode it to a Bitmap.
            YuvImage yuvImage = new YuvImage(
                    frame.getGrayscaleImageData().array(),
                    ImageFormat.NV21,
                    frame.getMetadata().getWidth(),
                    frame.getMetadata().getHeight(),
                    null);
            ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
            yuvImage.compressToJpeg(
                    new Rect(0, 0, frame.getMetadata().getWidth(), frame.getMetadata().getHeight()),
                    100, byteArrayOutputStream);
            byte[] jpegArray = byteArrayOutputStream.toByteArray();
            Bitmap tempBitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);

            // tempBitmap is a Bitmap version of the frame that your CameraSource is
            // capturing in real time, at the camera's resolution.
            // Add your own processing code here (e.g. the iris search).

            return mDelegate.detect(frame);
        }

        @Override
        public boolean isOperational() {
            return mDelegate.isOperational();
        }

        @Override
        public boolean setFocus(int id) {
            return mDelegate.setFocus(id);
        }
    }
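If you then need a face-sized crop for the iris search, one option (my own sketch, not part of the original answer) is to run the delegate first inside detect, then cut each detected face out of tempBitmap. Face coordinates are in frame space, which matches the decoded bitmap exactly — which is what the question asks for. cropFaceRegion below is a hypothetical helper:

    // Hypothetical helper (not from the original answer): crop the region of the
    // frame bitmap covered by one detected Face. The clamping guards against
    // bounding boxes that extend past the frame edge.
    private Bitmap cropFaceRegion(Bitmap frameBitmap, Face face) {
        int left = Math.max(0, (int) face.getPosition().x);
        int top = Math.max(0, (int) face.getPosition().y);
        int width = Math.min((int) face.getWidth(), frameBitmap.getWidth() - left);
        int height = Math.min((int) face.getHeight(), frameBitmap.getHeight() - top);
        return Bitmap.createBitmap(frameBitmap, left, top, width, height);
    }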

Then you need to hook your own detector up to the CameraSource by changing the createCameraSource method as follows:

    private void createCameraSource() {
        Context context = getApplicationContext();

        // You can use your own settings for your detector
        FaceDetector detector = new FaceDetector.Builder(context)
                .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
                .setProminentFaceOnly(true)
                .build();

        // This is how you wrap MyFaceDetector around the google.vision detector
        MyFaceDetector myFaceDetector = new MyFaceDetector(detector);

        // You can use your own processor
        myFaceDetector.setProcessor(
                new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                        .build());

        if (!myFaceDetector.isOperational()) {
            Log.w(TAG, "Face detector dependencies are not yet available.");
        }

        // You can use your own settings for the CameraSource
        mCameraSource = new CameraSource.Builder(context, myFaceDetector)
                .setRequestedPreviewSize(640, 480)
                .setFacing(CameraSource.CAMERA_FACING_FRONT)
                .setRequestedFps(30.0f)
                .build();
    }
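Nothing else in the sample needs to change; the wrapped detector starts exactly like the stock one. For completeness, a minimal sketch of starting it, assuming the mPreview (CameraSourcePreview) and mGraphicOverlay (GraphicOverlay) fields from the Google sample:

    private void startCameraSource() {
        if (mCameraSource != null) {
            try {
                // CameraSourcePreview.start drives the preview with the wrapped detector.
                mPreview.start(mCameraSource, mGraphicOverlay);
            } catch (IOException e) {
                Log.e(TAG, "Unable to start camera source.", e);
                mCameraSource.release();
                mCameraSource = null;
            }
        }
    }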