Precision and recall are two important metrics used to evaluate the performance of binary classification models. These metrics are particularly relevant in scenarios where there is an imbalance between the classes (i.e., one class is much more prevalent than the other).
Review:
- True Positives (TP): Number of samples correctly predicted as “positive.”
- False Positives (FP): Number of samples wrongly predicted as “positive” (i.e., actually negative).
- True Negatives (TN): Number of samples correctly predicted as “negative.”
- False Negatives (FN): Number of samples wrongly predicted as “negative” (i.e., actually positive).
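To make these four counts concrete, here is a minimal sketch of how they can be computed with NumPy. The arrays `y_true` and `y_pred` are illustrative labels (1 = positive, 0 = negative), not data from any real model:

```python
import numpy as np

# Hypothetical ground-truth labels and model predictions (1 = positive, 0 = negative)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))  # predicted positive, actually positive
fp = np.sum((y_pred == 1) & (y_true == 0))  # predicted positive, actually negative
tn = np.sum((y_pred == 0) & (y_true == 0))  # predicted negative, actually negative
fn = np.sum((y_pred == 0) & (y_true == 1))  # predicted negative, actually positive

print(f"TP={tp}, FP={fp}, TN={tn}, FN={fn}")  # TP=3, FP=1, TN=3, FN=1
```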
Let’s delve into each metric and discuss scenarios where emphasizing one over the other is preferable:
I. Precision:
1. Formula:
Precision = TP / (TP + FP)
- Precision focuses on the accuracy of the positive predictions. It answers the question: “Of all the instances predicted as positive, how many were actually positive?”
- High precision indicates that the model has a low rate of false positives.
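Using the same illustrative labels as above, the sketch below computes precision both by hand from the formula and via scikit-learn’s `precision_score`, which implements the same ratio:

```python
from sklearn.metrics import precision_score

# Same illustrative labels as above (1 = positive, 0 = negative)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Manual computation: Precision = TP / (TP + FP) = 3 / (3 + 1) = 0.75
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
print(tp / (tp + fp))                    # 0.75

# The same value via scikit-learn
print(precision_score(y_true, y_pred))   # 0.75
```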