You need to create your own version of the face tracker that wraps the google.vision face detector. In your MainActivity or FaceTrackerActivity class (in the Google face tracker sample), create a Detector<Face> that delegates to the stock FaceDetector, as follows:
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.util.SparseArray;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import java.io.ByteArrayOutputStream;

class MyFaceDetector extends Detector<Face> {
    private Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        // Convert the frame's NV21 image data to a JPEG, then decode it
        // into a Bitmap that you can process however you like.
        YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(),
                ImageFormat.NV21,
                frame.getMetadata().getWidth(),
                frame.getMetadata().getHeight(),
                null);
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(
                new Rect(0, 0, frame.getMetadata().getWidth(), frame.getMetadata().getHeight()),
                100, byteArrayOutputStream);
        byte[] jpegArray = byteArrayOutputStream.toByteArray();
        Bitmap tempBitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);

        // Do your custom processing on tempBitmap here, then delegate to the
        // wrapped google.vision detector so face tracking keeps working.
        return mDelegate.detect(frame);
    }

    @Override
    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    @Override
    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
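If you need each decoded frame outside the detector, one option is to hand the Bitmap to a callback. This is a minimal sketch assuming a hypothetical FrameListener interface of your own design; it is not part of the Mobile Vision API:

// Hypothetical callback of your own design, not a Mobile Vision class.
interface FrameListener {
    void onFrameBitmap(Bitmap bitmap);
}

// Inside MyFaceDetector: keep a listener and invoke it from detect(),
// right after tempBitmap is decoded:
//     if (mListener != null) mListener.onFrameBitmap(tempBitmap);
private FrameListener mListener;

void setFrameListener(FrameListener listener) {
    mListener = listener;
}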
Then you need to plug your own detector into the CameraSource by changing the createCameraSource method as follows:
private void createCameraSource() {
    Context context = getApplicationContext();

    // You can use your own settings for your detector
    FaceDetector detector = new FaceDetector.Builder(context)
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .setProminentFaceOnly(true)
            .build();

    // This is how you wrap the google.vision detector in MyFaceDetector
    MyFaceDetector myFaceDetector = new MyFaceDetector(detector);

    // You can use your own processor
    myFaceDetector.setProcessor(
            new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                    .build());

    if (!myFaceDetector.isOperational()) {
        Log.w(TAG, "Face detector dependencies are not yet available.");
    }

    // You can use your own settings for CameraSource
    mCameraSource = new CameraSource.Builder(context, myFaceDetector)
            .setRequestedPreviewSize(640, 480)
            .setFacing(CameraSource.CAMERA_FACING_FRONT)
            .setRequestedFps(30.0f)
            .build();
}
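GraphicFaceTrackerFactory here is the factory from the Google face tracker sample. If you are not starting from that sample, a minimal stand-in that just logs face positions could look like the sketch below; everything except the Mobile Vision classes is a placeholder:

import android.util.Log;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.MultiProcessor;
import com.google.android.gms.vision.Tracker;
import com.google.android.gms.vision.face.Face;

class GraphicFaceTrackerFactory implements MultiProcessor.Factory<Face> {
    @Override
    public Tracker<Face> create(Face face) {
        // The MultiProcessor creates one Tracker instance per detected face.
        return new Tracker<Face>() {
            @Override
            public void onUpdate(Detector.Detections<Face> detections, Face face) {
                Log.d("FaceTracker", "Face " + face.getId() + " at "
                        + face.getPosition().x + ", " + face.getPosition().y);
            }
        };
    }
}

Note that after building mCameraSource you still need to start it; in the Google sample this happens in startCameraSource(), which hands mCameraSource to the CameraSourcePreview.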