If you are interested in another way to do this, here is one. It is theoretically more sound, but not as straightforward.
Since you mention the mean and std, it sounds like you are dealing with data that you assume follows some distribution, for example that the data you observe is Gaussian. You can then use the symmetrised Kullback-Leibler divergence as a measure of the distance between those distributions, and classify with something like k-nearest neighbours.
For two probability densities p and q, KL(p, q) = 0 only if p and q are the same. However, KL is not symmetric, so to get a proper distance measure you can use
distance(p1, p2) = KL(p1, p2) + KL(p2, p1)
For Gaussians, KL(p1, p2) = ((μ1 - μ2)² + σ1² - σ2²) / (2σ2²) + ln(σ2/σ1). (I stole it from here, where you can also find the derivation. :)
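That formula in code, as a quick sanity check (a minimal Python sketch; the function and variable names are my own, not from any library):

```python
import math

def gaussian_kl(mu1, sigma1, mu2, sigma2):
    """KL(p1 || p2) for two univariate Gaussians p1 = N(mu1, sigma1^2), p2 = N(mu2, sigma2^2)."""
    return (((mu1 - mu2) ** 2 + sigma1 ** 2 - sigma2 ** 2) / (2 * sigma2 ** 2)
            + math.log(sigma2 / sigma1))

def symmetric_kl_distance(mu1, sigma1, mu2, sigma2):
    """Symmetrised KL: KL(p1, p2) + KL(p2, p1), usable as a distance."""
    return (gaussian_kl(mu1, sigma1, mu2, sigma2)
            + gaussian_kl(mu2, sigma2, mu1, sigma1))
```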
In short:
Given a training set D of (mean, std, class) tuples and a new pair p = (mean, std), find the q in D for which distance(q, p) is minimal and return its class.
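A rough 1-nearest-neighbour sketch of that, building on the functions above (illustrative only, not any particular library's API):

```python
def classify(train, mean, std):
    """train: list of (mean, std, class) tuples.
    Returns the class of the nearest training tuple under the symmetrised KL distance."""
    best_label, best_dist = None, float("inf")
    for mu, sigma, label in train:
        d = symmetric_kl_distance(mean, std, mu, sigma)
        if d < best_dist:
            best_dist, best_label = d, label
    return best_label

# Example with made-up data:
train = [(0.0, 1.0, "a"), (5.0, 2.0, "b")]
print(classify(train, 4.2, 1.5))   # -> "b"
```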
To me this is better than the multi-kernel SVM approach, because the way of classifying is less arbitrary.