Python scikit-learn multi-class multi-output performance metrics?

I ran a Random Forest classifier on my multi-class, multi-output target variable and got the result below.

My y_test values:

            Degree  Nature
    762721       1       7
    548912       0       6
    727126       1      12
    14880        1      12
    189505       1      12
    657486       1      12
    461004       1       0
    31548        0       6
    296674       1       7
    121330       0      17

Predicted output:

    [[ 1.  7.]
     [ 0.  6.]
     [ 1. 12.]
     [ 1. 12.]
     [ 1. 12.]
     [ 1. 12.]
     [ 1.  0.]
     [ 0.  6.]
     [ 1.  7.]
     [ 0. 17.]]

Now I want to check the performance of my classifier. I found that for multi-class multi-output problems, Hamming loss or jaccard_similarity_score are good indicators. I tried to calculate them, but I got a ValueError.

Error:

    ValueError: multiclass-multioutput is not supported

Below is the code I tried:

    from sklearn.metrics import hamming_loss, jaccard_similarity_score

    print(hamming_loss(y_test, RF_predicted))
    print(jaccard_similarity_score(y_test, RF_predicted))
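For reference, a minimal toy reproduction (the arrays below are made-up stand-ins, not the actual data) raises the same error:

    import numpy as np
    from sklearn.metrics import hamming_loss

    # Hypothetical stand-in targets with the same two-column,
    # multi-class shape as the Degree / Nature data above.
    y_test = np.array([[1, 7], [0, 6], [1, 12]])
    RF_predicted = np.array([[1, 7], [0, 6], [1, 0]])

    # sklearn classifies this target type as multiclass-multioutput,
    # which hamming_loss does not accept, hence the ValueError.
    print(hamming_loss(y_test, RF_predicted))
    # ValueError: multiclass-multioutput is not supported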

Thanks,

1 answer

To calculate the unsupported Hamming loss for a multi-class / multi-output target manually, you could:

    import numpy as np

    y_true = np.array([[1, 1], [2, 3]])
    y_pred = np.array([[0, 1], [1, 2]])

    np.sum(np.not_equal(y_true, y_pred)) / float(y_true.size)
    # 0.75
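As a cross-check (a sketch, not part of the original answer): since each individual column is a plain multi-class target, applying sklearn's hamming_loss column by column and averaging gives the same number:

    from sklearn.metrics import hamming_loss

    # Per-column Hamming loss: column 0 -> 1.0, column 1 -> 0.5.
    per_column = [hamming_loss(y_true[:, i], y_pred[:, i])
                  for i in range(y_true.shape[1])]
    print(np.mean(per_column))
    # 0.75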

You can also get a confusion_matrix for each of the two labels:

    from sklearn.metrics import confusion_matrix, precision_score

    np.random.seed(42)

    y_true = np.vstack((np.random.randint(0, 2, 10), np.random.randint(2, 5, 10))).T
    # [[0 4]
    #  [1 4]
    #  [0 4]
    #  [0 4]
    #  [0 2]
    #  [1 4]
    #  [0 3]
    #  [0 2]
    #  [0 3]
    #  [1 3]]

    y_pred = np.vstack((np.random.randint(0, 2, 10), np.random.randint(2, 5, 10))).T
    # [[1 2]
    #  [1 2]
    #  [1 4]
    #  [1 4]
    #  [0 4]
    #  [0 3]
    #  [1 4]
    #  [1 3]
    #  [1 3]
    #  [0 4]]

    confusion_matrix(y_true[:, 0], y_pred[:, 0])
    # [[1 6]
    #  [2 1]]

    confusion_matrix(y_true[:, 1], y_pred[:, 1])
    # [[0 1 1]
    #  [0 1 2]
    #  [2 1 2]]
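If the target had more than two output columns, the same idea could be wrapped in a loop (a sketch, assuming y_true and y_pred as above):

    # One confusion matrix per output column.
    for col in range(y_true.shape[1]):
        print(confusion_matrix(y_true[:, col], y_pred[:, col]))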

You can also calculate precision_score like this (recall_score works the same way):

    precision_score(y_true[:, 0], y_pred[:, 0])
    # 0.142857142857
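Note that precision_score's default average='binary' only works on the first (binary) column; for the multi-class second column an averaging strategy has to be passed explicitly, for example macro averaging (a sketch, using the same y_true and y_pred as above):

    # Macro-averaged precision over the multi-class second column.
    precision_score(y_true[:, 1], y_pred[:, 1], average='macro')
    # approximately 0.244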
