The importance of the confusion matrix in machine learning


Sarima Chiorlu

Posted on May 18, 2022


As machine learning engineers, it is important for us to know how well our model performs on its predictions. This helps us find out whether we have an overfitting problem and correct it early on while building the model. One of the ways machine learning engineers evaluate their models is through a technique known as the confusion matrix.

What is a Confusion matrix?
A confusion matrix is a technique for measuring the performance of a machine learning classification model. It is a table that shows how the model's predictions on a set of test data compare with the actual (true) values: each entry counts how often a predicted class matched, or failed to match, the real class.
It is extremely useful for computing Recall, Precision, Specificity, Accuracy, and, most importantly, the AUC-ROC curve.

In a confusion matrix, there are four possible outcomes. These are:
True Positive
False Positive
True Negative
False Negative

[Image: the 2x2 confusion matrix layout, with predicted classes on one axis and actual classes on the other]

True Positive: The predicted value matches the real value. Our predicted value was positive and the real value was positive.

False Positive: The predicted value did not match the real value. Our predicted value was positive but the actual value was negative.

True Negative: The predicted value matches the real value. Our predicted value was negative and the actual value was negative.

False Negative: The predicted value did not match the real value. Our predicted value was negative while the real value was positive.
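
To make these four outcomes concrete, here is a small sketch with made-up labels (the same ones reused in the code at the end of this post) that counts each outcome by hand:

actual    = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]  # 1 = positive, 0 = negative
predicted = [1, 0, 0, 1, 0, 0, 1, 1, 1, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # predicted positive, actually positive
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # predicted positive, actually negative
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # predicted negative, actually negative
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # predicted negative, actually positive

print(tp, fp, tn, fn)  # 3 2 4 1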

For instance:
Suppose we are trying to predict which people will experience a particular side-effect, one associated with cancer, after taking a certain food (Food A). We have our dataset, we have cleaned it, trained our model, and done a preliminary test with our validation set, and the model reports an accuracy of 88.8%. However, most of that score comes from correctly labelling the people who will not experience the side-effect; the model is largely missing the people who will. Considering the high death rate we have seen from cancer over the years, what we should really be doing is catching the positive cases, so that those individuals can see a doctor while the disease is still at an early stage. Now, how do we measure that? We do it via:

Recall: The term recall refers to the proportion of genuine positive examples identified by a predictive model.
Mathematically: Recall = TP / (TP + FN)

Precision is similar to recall in that it is also focused on the positive class, but it measures something different.

Precision is concerned with the number of genuinely positive examples identified by your model in comparison to all positive examples labeled by it.
Mathematically: Precision = TP / (TP + FP)

If the distinction between recall and precision is still unclear to you, consider this:

Precision provides an answer to the following question: What percentage of all selected positive examples is genuinely positive?

This question is answered by recall: What percentage of all the positive examples in your dataset did your model identify?
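
To see the difference in code, here is a small sketch using the same made-up labels as above and scikit-learn's built-in scorers:

from sklearn.metrics import precision_score, recall_score

actual    = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
predicted = [1, 0, 0, 1, 0, 0, 1, 1, 1, 0]

print(precision_score(actual, predicted))  # 3 TP / (3 TP + 2 FP) = 0.6
print(recall_score(actual, predicted))     # 3 TP / (3 TP + 1 FN) = 0.75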
[Image: an example confusion matrix with counts, used for the calculations below]

Let's get all practical
From the confusion matrix shown above, let's calculate our accuracy.

Accuracy = (TP + TN) / (TP + TN + FP + FN)

We have an accuracy of 88.8%, a precision of 81.4%, and a recall of 95.5%.

It should be noted that as we try to increase the precision of our model, the recall tends to go down, and vice versa: there is a trade-off between the two. We use what is known as the F1-score, the harmonic mean of precision and recall, to combine the two into a single number. This helps in evaluating our model's performance.
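
As a quick illustration with the same made-up labels, the F1-score can be computed by hand or with scikit-learn:

from sklearn.metrics import f1_score

actual    = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
predicted = [1, 0, 0, 1, 0, 0, 1, 1, 1, 0]

precision, recall = 0.6, 0.75  # values from the previous example
f1_manual = 2 * (precision * recall) / (precision + recall)  # harmonic mean

print(f1_manual)                    # 0.666...
print(f1_score(actual, predicted))  # same value from scikit-learn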
Now let's see how we get a confusion matrix like the one above. Let's code it out:

I am going to assume that you have already trained and tested your model. We are just checking how well it has performed.

from sklearn.metrics import confusion_matrix
import itertools
import numpy as np
import matplotlib.pyplot as plt

First, we import the libraries we will need (NumPy is used by the plotting function further down).

cm = confusion_matrix(y_true=test_labels, y_pred=rounded_predictions)

y_true: the correct values, i.e. the labels from your test data, which we use to check our predictions
y_pred: the values our model has predicted
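
As an aside, here is a minimal, hypothetical sketch of where rounded_predictions could come from, assuming a binary classifier whose predictions are probabilities for the positive class (the numbers below are made up):

import numpy as np

predicted_probabilities = np.array([0.91, 0.03, 0.42, 0.77, 0.10])  # e.g. what model.predict() might return

rounded_predictions = np.round(predicted_probabilities).astype(int)  # round each probability to a hard 0/1 label
print(rounded_predictions)  # [1 0 0 1 0]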

def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues):
  plt.imshow(cm, interpolation='nearest', cmap=cmap)
  plt.title(title)
  plt.colorbar()
  tick_marks = np.arange(len(classes))
  plt.xticks(tick_marks, classes, rotation=45)
  plt.yticks(tick_marks, classes)

  if normalize:
    # Divide each row by its total so cells show proportions instead of counts
    cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    print('Normalized confusion matrix')
  else:
    print('Confusion matrix, without normalization')

  print(cm)

  # Write the cell values onto the plot, in a colour that stays readable
  thresh = cm.max() / 2.
  for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
    plt.text(j, i, cm[i, j],
             horizontalalignment='center',
             color='white' if cm[i, j] > thresh else 'black')

  plt.tight_layout()
  plt.ylabel('True label')
  plt.xlabel('Predicted label')
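
Here is one possible way to call the function, using the cm we computed earlier; the class names below are just placeholders for your own labels:

cm_plot_labels = ['no_side_effect', 'had_side_effect']  # placeholder class names
plot_confusion_matrix(cm=cm, classes=cm_plot_labels, title='Confusion Matrix')
plt.show()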

The function above is for plotting the confusion matrix as a graph. If you just want the raw matrix printed as numbers, you can use the simpler code below:

y_true = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]  # Assuming this is the data in our test data set
y_pred = [1, 0, 0, 1, 0, 0, 1, 1, 1, 0]  # This is what our model has predicted
result = confusion_matrix(y_true, y_pred)
print(result)

Conclusion
Through the use of the confusion matrix, we can check that our model is performing well. Thanks to this invention by Karl Pearson, originally known as the contingency table, machine learning engineers have long been able to measure the performance of their models, and this has helped us train them better.
