Feature extraction using PCA

My task is to recognize gestures. I want to do this by training a support vector machine (SVM) on features extracted with PCA (Principal Component Analysis), but I got a little confused about the procedure.

After going through various articles, I figured out these steps (sketched in code below the list).

  • Take 'd' images (each n * n) of the same gesture.
  • Flatten each n * n image into a single row vector.
  • Form a data matrix of size d * (n * n).
  • Calculate the eigenvalues and eigenvectors of the covariance matrix.
  • Use the top 'k' eigenvectors to form a subspace.
  • Project each image from its original n * n dimensions down to 'k' dimensions.
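
Here is a minimal sketch of these six steps in Python with NumPy. The values of d, n, and k and the random image array are placeholders of my own; in practice you would load real gesture frames:

    import numpy as np

    # Placeholder values: d images of size n x n for one gesture class, keep k components.
    d, n, k = 100, 32, 20
    images = np.random.rand(d, n, n)  # stand-in for real gesture images

    # Flatten each n x n image into a row vector -> data matrix of shape d x (n*n).
    X = images.reshape(d, n * n)

    # Center the data and compute the covariance matrix of the flattened images.
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = np.cov(Xc, rowvar=False)    # shape (n*n, n*n)

    # Eigen-decomposition; eigh is used because the covariance matrix is symmetric.
    eigvals, eigvecs = np.linalg.eigh(cov)

    # Keep the top k eigenvectors (columns), ordered by descending eigenvalue.
    order = np.argsort(eigvals)[::-1][:k]
    W = eigvecs[:, order]             # projection matrix, shape (n*n, k)

    # Project the images into the k-dimensional subspace.
    X_reduced = Xc @ W                # d x k feature vectors for the classifier

(When d is much smaller than n * n, it is usually cheaper to take the eigenvectors of the d x d matrix Xc @ Xc.T instead, but the straightforward version above is enough to show the idea.)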

Question:

1) I have a set of 100 gestures, and applying the above 6 steps to each one will give me 100 subspaces. My testing needs to be done in real time to find which class a gesture belongs to. Onto which subspace do I project each video frame to reduce its dimensionality before feeding it to the classifier?

Thanks in advance.
