What are some of the algorithms associated with detecting user gestures based on skeleton movements? The ones I know about include:
a) Hidden Markov models. You define a number of features to feed into the HMM, such as arm position, elbow angle, etc., and then spend some time training the system, adjusting the parameters until it can accurately recognize your gestures. I believe this is how Wii gestures are generally implemented. A good example with Kinect here.
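As a rough sketch of the HMM idea, here is the scaled forward algorithm scoring a sequence of quantized skeleton features (the "arm raise" model and its numbers below are purely hypothetical; in practice the probabilities would come from training):

```python
import numpy as np

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Score a discrete observation sequence under an HMM using the
    (scaled) forward algorithm.

    obs     : sequence of symbol indices (e.g. quantized elbow angles)
    start_p : initial state probabilities, shape (S,)
    trans_p : state transition matrix, shape (S, S)
    emit_p  : emission probabilities, shape (S, num_symbols)
    """
    start_p, trans_p, emit_p = map(np.asarray, (start_p, trans_p, emit_p))
    alpha = start_p * emit_p[:, obs[0]]
    log_lik = 0.0
    for o in obs[1:]:
        scale = alpha.sum()            # rescale to avoid underflow
        log_lik += np.log(scale)
        alpha = (alpha / scale) @ trans_p * emit_p[:, o]
    return log_lik + np.log(alpha.sum())

# Hypothetical "arm raise" model: state 0 = arm low, state 1 = arm high.
# Symbols 0/1/2 are quantized hand-height bins (low/mid/high).
raise_start = [1.0, 0.0]
raise_trans = [[0.5, 0.5],
               [0.0, 1.0]]
raise_emit  = [[0.80, 0.19, 0.01],    # low state mostly emits "low"
               [0.01, 0.19, 0.80]]    # high state mostly emits "high"

upward   = forward_log_likelihood([0, 1, 2], raise_start, raise_trans, raise_emit)
downward = forward_log_likelihood([2, 1, 0], raise_start, raise_trans, raise_emit)
# An upward sweep scores higher than a downward one under this model.
```

To classify, you would train one HMM per gesture in your vocabulary and pick the model with the highest log-likelihood for an incoming sequence (with a rejection threshold for "no gesture").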
b) Connect the dots. If you have a limited vocabulary of gestures, you can set up collision spheres along the path that each hand normally follows. If the hand hits the spheres in sequence quickly enough, you may have a gesture.
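The connect-the-dots idea can be sketched like this (sphere positions, radius, and timing are made-up illustrative values):

```python
import math

def detect_gesture(samples, checkpoints, radius=0.15, max_duration=1.0):
    """Check whether a hand trace passes through a sequence of collision
    spheres, in order, fast enough to count as the gesture.

    samples      : list of (timestamp, (x, y, z)) hand positions
    checkpoints  : ordered sphere centres defining the gesture path
    radius       : sphere radius, same units as the positions
    max_duration : max seconds allowed from first to last sphere
    """
    idx, start_t = 0, None
    for t, pos in samples:
        if math.dist(pos, checkpoints[idx]) <= radius:
            if idx == 0:
                start_t = t            # clock starts at the first sphere
            idx += 1
            if idx == len(checkpoints):
                return (t - start_t) <= max_duration
    return False

# A vertical "swipe up" defined by three spheres (illustrative values).
swipe_up = [(0.0, 0.0, 0.0), (0.0, 0.5, 0.0), (0.0, 1.0, 0.0)]
fast = [(0.0, (0.0, 0.0, 0.0)), (0.2, (0.0, 0.5, 0.0)), (0.4, (0.02, 1.0, 0.0))]
slow = [(0.0, (0.0, 0.0, 0.0)), (1.2, (0.0, 0.5, 0.0)), (2.5, (0.02, 1.0, 0.0))]
```

Here `detect_gesture(fast, swipe_up)` succeeds and the slow trace is rejected. Tuning the sphere radius and the time limit is exactly the kind of tweaking mentioned below.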
Both methods will probably require a lot of tweaking to get the success/failure rate where you want it. I am wondering if there are other approaches that I don't know about, and what the benefits of each are.