Following the example from Wikipedia: if a classifier has been trained to distinguish between cats and non-cats, the confusion matrix summarizes the test results of the algorithm for further inspection. Assuming a sample of 27 animals (8 cats and 19 non-cats), the resulting confusion matrix may look like the one shown below.

With sklearn
To preserve the structure of the Wikipedia confusion matrix (predicted values as rows, actual classes as columns), pass the predicted values first and the actual values second:
```python
from sklearn.metrics import confusion_matrix

y_true = [0,0,0,1,0,0,1,0,0,1,0,1,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0]
y_pred = [0,0,0,1,0,0,1,0,0,1,0,1,0,0,0,0,1,0,0,0,0,1,0,1,0,0,0]

confusion_matrix(y_pred, y_true, labels=[1, 0])
# Out[1]:
# array([[ 5,  2],
#        [ 3, 17]], dtype=int64)
```
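If you also need the individual counts (true negatives, false positives, false negatives, true positives), sklearn's matrix with the default label order `[0, 1]` can be unpacked with `ravel()`. A small sketch using the same data:

```python
from sklearn.metrics import confusion_matrix

y_true = [0,0,0,1,0,0,1,0,0,1,0,1,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0]
y_pred = [0,0,0,1,0,0,1,0,0,1,0,1,0,0,0,0,1,0,0,0,0,1,0,1,0,0,0]

# With the default label order [0, 1] and (y_true, y_pred) argument order,
# ravel() flattens the 2x2 matrix row by row into tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)  # 17 2 3 5
```

Note that the argument order here is `(y_true, y_pred)`, the order sklearn documents, which transposes the matrix relative to the Wikipedia-style layout above.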
Another way is with a pandas crosstab:
```python
import numpy as np
import pandas as pd

true = pd.Categorical(np.where(np.array(y_true) == 1, 'cat', 'non-cat'),
                      categories=['cat', 'non-cat'])
pred = pd.Categorical(np.where(np.array(y_pred) == 1, 'cat', 'non-cat'),
                      categories=['cat', 'non-cat'])

pd.crosstab(pred, true, rownames=['pred'], colnames=['Actual'], margins=False)
# Out[2]:
# Actual   cat  non-cat
# pred
# cat        5        2
# non-cat    3       17
```
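A nice extra of `pd.crosstab` is that `margins=True` appends the row and column totals (named via `margins_name`), which reproduces the totals shown in the Wikipedia table. A sketch with the same data, using plain string arrays:

```python
import numpy as np
import pandas as pd

y_true = [0,0,0,1,0,0,1,0,0,1,0,1,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0]
y_pred = [0,0,0,1,0,0,1,0,0,1,0,1,0,0,0,0,1,0,0,0,0,1,0,1,0,0,0]

true = np.where(np.array(y_true) == 1, 'cat', 'non-cat')
pred = np.where(np.array(y_pred) == 1, 'cat', 'non-cat')

# margins=True adds a "Total" row and column summing each axis
table = pd.crosstab(pred, true, rownames=['pred'], colnames=['Actual'],
                    margins=True, margins_name='Total')
print(table)
```

The `Total` column recovers the class counts (8 cats, 19 non-cats, 27 animals in all).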
I hope this helps.