Metrics to Measure ML Models

Learn about different metrics to measure ML models

Posted by Nivin Anton Alexis Lawrence on November 23, 2018


I always believe measurement is underrated. The people who spent years coming up with these techniques are not credited enough. I feel measurement is the foundation for any invention: if we didn't have a system that could measure and give us insight into an algorithm or model we invented, we would be stuck in the complete randomness of invention with zero progress. - Nivin Lawrence

Here is a list of the metrics I know:

Confusion Matrix

Say we have a machine learning model that predicts whether a given sample is a cat. Suppose we run it on a test set of 1,000 samples. There are four possibilities for each prediction:

  • the model correctly predicts the given sample as a cat (True Positive)
  • the model correctly predicts the given sample as not a cat (True Negative)
  • the model wrongly predicts the given sample as a cat (False Positive)
  • the model wrongly predicts the given sample as not a cat (False Negative)

We can represent these four possibilities in a matrix called the confusion matrix, which looks something like this:

                    actual cat   actual not cat
predicted cat       TP = 50      FP = 450
predicted not cat   FN = 40      TN = 460
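The four counts above can be tallied from paired lists of true and predicted labels. Here is a minimal sketch in plain Python (the function name, the toy labels, and the `positive="cat"` convention are my own choices, not from the post):

```python
def confusion_counts(y_true, y_pred, positive="cat"):
    """Count TP, FP, FN, TN for a binary classification problem."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    return tp, fp, fn, tn

# Tiny made-up example: 3 actual cats, 2 actual not-cats.
y_true = ["cat", "cat", "not", "not", "cat"]
y_pred = ["cat", "not", "cat", "not", "cat"]
print(confusion_counts(y_true, y_pred))  # (2, 1, 1, 1)
```

Libraries such as scikit-learn provide the same tally via `sklearn.metrics.confusion_matrix`, but the hand-rolled version makes the four cells explicit.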

We will use the data above to explain the concepts below.

Accuracy

[Accuracy = \frac{ TP + TN }{ TP + TN + FP + FN } = \frac{ 50 + 460 }{ 50 + 460 + 450 + 40 } = \frac{510}{1000} = 0.51]
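Plugging the counts from the confusion matrix into this formula takes one line of Python (the variable names are mine):

```python
# Counts taken from the confusion matrix above.
TP, TN, FP, FN = 50, 460, 450, 40
accuracy = (TP + TN) / (TP + TN + FP + FN)
print(accuracy)  # 0.51
```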

Precision

[Precision = \frac{ TP }{ TP + FP } = \frac{ 50 }{ 50 + 450 } = \frac{50}{500} = 0.1]
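With the same counts, precision asks: of everything the model called a cat, what fraction actually was one? A quick sketch:

```python
# Counts taken from the confusion matrix above.
TP, FP = 50, 450
precision = TP / (TP + FP)  # fraction of "cat" predictions that were right
print(precision)  # 0.1
```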

Recall

[Recall = \frac{ TP }{ TP + FN } = \frac{ 50 }{ 50 + 40 } = \frac{50}{90} \approx 0.56]
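Recall flips the question: of all the actual cats, what fraction did the model find? Using the same counts:

```python
# Counts taken from the confusion matrix above.
TP, FN = 50, 40
recall = TP / (TP + FN)  # fraction of actual cats the model found
print(round(recall, 2))  # 0.56
```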

  • Sensitivity (another name for Recall)
  • AUC (Area Under the Curve)
  • ROC
  • Correlation
  • Calibration
  • F1 Score
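The F1 score from the list above can be computed directly from the precision and recall worked out earlier; it is the harmonic mean of the two, so it stays low unless both are reasonably high. A minimal sketch using the same counts:

```python
# Counts taken from the confusion matrix above.
TP, FP, FN = 50, 450, 40
precision = TP / (TP + FP)
recall = TP / (TP + FN)
# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.169
```

The low F1 here reflects the very poor precision (0.1), even though recall is decent: the harmonic mean punishes the weaker of the two.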