To calculate the (otherwise unsupported) Hamming loss for a multiclass / multilabel problem, you could:
    import numpy as np

    y_true = np.array([[1, 1], [2, 3]])
    y_pred = np.array([[0, 1], [1, 2]])

    np.sum(np.not_equal(y_true, y_pred)) / float(y_true.size)
    # 0.75
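If you need this in more than one place, you could wrap the same element-wise comparison in a small helper. This is only a sketch; `hamming_fraction` is a hypothetical name, not a scikit-learn function:

    # Hypothetical helper (not part of scikit-learn): fraction of mismatched
    # entries between two equally shaped label arrays - the same formula as above.
    def hamming_fraction(y_true, y_pred):
        y_true = np.asarray(y_true)
        y_pred = np.asarray(y_pred)
        return np.mean(y_true != y_pred)

    hamming_fraction(y_true, y_pred)
    # 0.75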
You can also get a confusion_matrix for each of the two output columns (labels):
    from sklearn.metrics import confusion_matrix, precision_score

    np.random.seed(42)
    y_true = np.vstack((np.random.randint(0, 2, 10), np.random.randint(2, 5, 10))).T
    # [[0 4]
    #  [1 4]
    #  [0 4]
    #  [0 4]
    #  [0 2]
    #  [1 4]
    #  [0 3]
    #  [0 2]
    #  [0 3]
    #  [1 3]]
    y_pred = np.vstack((np.random.randint(0, 2, 10), np.random.randint(2, 5, 10))).T
    # [[1 2]
    #  [1 2]
    #  [1 4]
    #  [1 4]
    #  [0 4]
    #  [0 3]
    #  [1 4]
    #  [1 3]
    #  [1 3]
    #  [0 4]]

    confusion_matrix(y_true[:, 0], y_pred[:, 0])
    # [[1 6]
    #  [2 1]]
    confusion_matrix(y_true[:, 1], y_pred[:, 1])
    # [[0 1 1]
    #  [0 1 2]
    #  [2 1 2]]
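If there are more than two output columns, the same per-column idea generalizes; a minimal sketch that simply loops over the columns of the arrays above:

    # Compute one confusion matrix per output column instead of
    # calling confusion_matrix by hand for each column.
    for col in range(y_true.shape[1]):
        cm = confusion_matrix(y_true[:, col], y_pred[:, col])
        print("column", col)
        print(cm)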
You can also calculate precision_score like this (recall_score works the same way):
    precision_score(y_true[:, 0], y_pred[:, 0])
    # 0.142857142857
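Note that the first column is binary, so the call above works as-is. The second column is multiclass (labels 2 to 4), so, depending on your scikit-learn version, precision_score may require an explicit `average` argument (e.g. `'macro'`, `'micro'`, or `'weighted'`). A small sketch, with `'macro'` chosen only as an example:

    # The second output column is multiclass, so pass an averaging strategy.
    precision_score(y_true[:, 1], y_pred[:, 1], average='macro')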