Facial landmark extraction in OpenCV 3.0. Can anyone suggest good open-source libraries for extracting facial landmarks?

I am currently using OpenCV 3.0 in the hope of creating a program that does three things. First, it finds faces in a live video feed. Second, it retrieves landmark locations using ASM or AAM. Finally, it uses an SVM to classify the facial expressions of the faces in the video.
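For the first step (face detection on a live feed), OpenCV 3.0 already ships everything needed. A minimal sketch is below; the Haar cascade file name is the one distributed with OpenCV, but the path on your system is an assumption you will need to adjust.

```cpp
// Sketch of step 1: detect faces in a live webcam feed with OpenCV 3.0.
// Assumes haarcascade_frontalface_default.xml is in the working directory.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::CascadeClassifier faceCascade;
    if (!faceCascade.load("haarcascade_frontalface_default.xml"))
        return 1; // cascade file not found

    cv::VideoCapture cap(0); // default webcam
    cv::Mat frame, gray;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(gray, gray);

        std::vector<cv::Rect> faces;
        faceCascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(80, 80));

        for (const cv::Rect& r : faces)
            cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);

        cv::imshow("faces", frame);
        if (cv::waitKey(1) == 27) break; // Esc to quit
    }
    return 0;
}
```

The detected rectangles are what you would then hand to the landmark extractor in step 2.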

I have done a lot of research, but I cannot find a suitable open-source AAM or ASM library for this. Also, if possible, I would like to be able to train the AAM or ASM to extract the specific landmarks that I need, for example, all the numbered dots in the picture below: www.imgur.com/XnbCZXf
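For the third step (SVM expression classification), OpenCV 3's `ml` module is sufficient once the landmarks are extracted. The sketch below uses toy 2-D feature vectors as a stand-in; in practice each row would be a flattened landmark vector and each label an expression id.

```cpp
// Sketch of step 3: train and query an SVM with OpenCV 3's ml module.
// The samples/labels here are toy placeholders, not real landmark features.
#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>

int main() {
    // Two well-separated toy clusters standing in for two expressions.
    cv::Mat samples = (cv::Mat_<float>(4, 2) << 1, 1,  1, 2,  8, 8,  9, 8);
    cv::Mat labels  = (cv::Mat_<int>(4, 1)   << 0, 0, 1, 1);

    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setType(cv::ml::SVM::C_SVC);
    svm->setKernel(cv::ml::SVM::LINEAR); // try RBF for real landmark data
    svm->train(samples, cv::ml::ROW_SAMPLE, labels);

    cv::Mat query = (cv::Mat_<float>(1, 2) << 8.5f, 8.0f);
    float cls = svm->predict(query); // should fall in class 1
    return cls == 1.0f ? 0 : 1;
}
```

A common choice of feature vector is the landmark coordinates normalized by the face bounding box, so the classifier is invariant to face position and scale.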

If there are alternatives to what I suggested that provide the required functionality, feel free to suggest them.

Thanks in advance for any answers; any tips will help me with this project.

+8
c++11 opencv
4 answers

From the comments, I see that you would prefer to train your own facial landmark detector using the dlib library. You had a few questions about which training set was used to create the shape_predictor_68_face_landmarks.dat model that dlib provides.

Some pointers:

  • The author (Davis King) stated that he used annotated images from the iBUG 300-W dataset. This dataset has a total of 11,167 images annotated with a 68-point scheme. As a standard trick, he also mirrors each image to effectively double the training set size, i.e. 11,167 * 2 = 22,334 images. Here's a link to the dataset: http://ibug.doc.ic.ac.uk/resources/facial-point-annotations/
    • Note: the iBUG 300-W dataset includes two datasets that are not freely/publicly available: XM2VTS and FRGCv2. Unfortunately, these images make up the bulk of iBUG 300-W (7,310 images, or 65.5%).
    • The original paper used only the HELEN, AFW, and LFPW datasets. Thus, you should be able to build a reasonably good model using only the public images (HELEN, LFPW, AFW, IBUG), i.e. 3,857 images.
      • If you google "one millisecond face alignment kazemi", the paper (and its project page) will be the top hit.

You can learn more about the training process by reading the comment section of this dlib blog post, where he briefly discusses the parameters he chose for training: http://blog.dlib.net/2014/08/real-time-face-pose-estimation.html

Given the size of the training set (thousands of images), I don't think you will get acceptable results with just a few images. Fortunately, there are many public face datasets, including the one linked above :)
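Once you have annotated data, training your own predictor is short in code. The sketch below follows dlib's own train_shape_predictor_ex.cpp example; the XML file name is a placeholder (dlib's imglab tool produces files in this format), and the parameter values are the ones from that example, which you should tune for your data.

```cpp
// Sketch: train a custom dlib shape predictor on your own landmark annotations.
// "training.xml" is a placeholder; create it with dlib's imglab tool.
#include <dlib/image_processing.h>
#include <dlib/data_io.h>
#include <vector>

int main() {
    using namespace dlib;

    // Images plus one full_object_detection (box + landmark parts) per face.
    dlib::array<array2d<unsigned char>> images;
    std::vector<std::vector<full_object_detection>> shapes;
    load_image_dataset(images, shapes, "training.xml");

    shape_predictor_trainer trainer;
    // Values from dlib's train_shape_predictor_ex.cpp; tune for your dataset.
    trainer.set_oversampling_amount(300);
    trainer.set_nu(0.05);
    trainer.set_tree_depth(2);
    trainer.be_verbose();

    shape_predictor sp = trainer.train(images, shapes);
    serialize("my_shape_predictor.dat") << sp;
    return 0;
}
```

The resulting .dat file is loaded exactly like the stock 68-point model, so the rest of your pipeline does not change.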

Hope this helps!

+7

AAM and ASM are pretty old school, and their results are a little disappointing.

Most modern face trackers use a cascade of regressors or deep learning. You have dlib, which works pretty well (BSD license) with its demo, some others on GitHub, or a number of APIs, like this one, that can be used for free.
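The dlib route mentioned above takes very little code. A sketch combining it with an OpenCV capture loop is below; it assumes you have downloaded shape_predictor_68_face_landmarks.dat into the working directory.

```cpp
// Sketch: 68-point landmark detection on a webcam feed with dlib + OpenCV.
// Assumes shape_predictor_68_face_landmarks.dat is in the working directory.
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing.h>
#include <dlib/opencv.h>
#include <opencv2/opencv.hpp>

int main() {
    dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();
    dlib::shape_predictor sp;
    dlib::deserialize("shape_predictor_68_face_landmarks.dat") >> sp;

    cv::VideoCapture cap(0);
    cv::Mat frame;
    while (cap.read(frame)) {
        // Wrap the OpenCV frame for dlib without copying pixel data.
        dlib::cv_image<dlib::bgr_pixel> img(frame);

        for (const dlib::rectangle& face : detector(img)) {
            dlib::full_object_detection shape = sp(img, face);
            for (unsigned i = 0; i < shape.num_parts(); ++i)
                cv::circle(frame,
                           cv::Point(shape.part(i).x(), shape.part(i).y()),
                           2, cv::Scalar(0, 0, 255), -1);
        }
        cv::imshow("landmarks", frame);
        if (cv::waitKey(1) == 27) break; // Esc to quit
    }
    return 0;
}
```

The `shape.part(i)` points are also the natural feature vector to feed into the SVM stage the question describes.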

You can also look at my project, which uses C++/OpenCV/dlib, has all the functionality you described, and works well.

+2
  • Try Stasm 4.0.0. It gives approximately 77 landmark points on the face.
+1

I advise you to use the FaceTracker library. It is written in C++ using OpenCV 2.x. You will not be disappointed with it.

0
