Accuracy vs Precision vs Recall
What are the classification metrics used to evaluate classification algorithms or models?
- Accuracy
- Precision
- Recall
- F1-score
What is the analogy that can be used to describe why we use classification metrics?
Classification performance metrics are analogous to a report card for your kids. If you want to quickly understand how well your child is doing and whether they need any help, you'd look at their grades.
e.g., If your child scores an A in Math but a C in Science, you know that they need help in Science but can be left alone in Math.
Similarly, these four metrics provide an overview of how well your classification algorithms are performing and whether they need tuning.
1) Accuracy
What is accuracy?
Accuracy is a performance metric that tells you what fraction of the data a classification algorithm classified correctly when looking at the entire dataset.
Why do we need accuracy?
It gives a single, easy-to-read number summarizing overall correctness, making it a natural first check on a model.
How do I calculate accuracy?
Accuracy = (TP + TN) / (TP + TN + FP + FN)
When is accuracy a poor metric to use for evaluation?
When the dataset is imbalanced, i.e., one class heavily outnumbers the other. This is similar to calculating the average income in an area like Seattle, where billionaires such as Jeff Bezos (net worth $135 billion) and Bill Gates ($93 billion) skew the average far above what a typical resident earns.
e.g., 9,990 patients have cancer but 10 are cancer-free. A model that simply predicts "cancer" for every patient scores 99.9% accuracy while never identifying a single cancer-free patient.
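A minimal sketch of the accuracy formula above, using the article's 9,990-vs-10 cancer example to show why accuracy misleads on imbalanced data (the `accuracy` helper and the specific TP/TN/FP/FN counts are illustrative assumptions):

```python
def accuracy(tp, tn, fp, fn):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + tn + fp + fn)

# Imbalanced dataset: 9,990 patients have cancer, 10 are cancer-free.
# A useless model that predicts "cancer" for every single patient:
tp = 9990  # all cancer cases labeled "cancer" (correct)
fn = 0     # no cancer cases missed
tn = 0     # no cancer-free patients identified
fp = 10    # all 10 cancer-free patients mislabeled "cancer"

print(accuracy(tp, tn, fp, fn))  # 0.999
```

Despite 99.9% accuracy, this model has learned nothing: it misses every member of the minority class, which is exactly why precision and recall are needed alongside accuracy.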