Sensitivity and Specificity:
Sensitivity (also called the true positive rate, or the recall rate in some fields) measures the proportion of actual positives that are correctly identified as such (e.g. the percentage of sick people who are correctly identified as having the condition), and is complementary to the false negative rate.
Specificity (sometimes called the true negative rate) measures the proportion of actual negatives that are correctly identified as such (e.g. the percentage of healthy people who are correctly identified as not having the condition), and is complementary to the false positive rate. A perfect predictor would be 100% sensitive (identifying all people from the sick group as sick) and 100% specific (not identifying anyone from the healthy group as sick).
Let’s imagine a study evaluating a new test that screens people for a disease. Each person taking the test is in one of two states: they either have the disease or they do not. The test outcome can be positive (predicting that the person has the disease) or negative (predicting that the person does not have the disease). The test result for each subject may or may not match the subject’s actual status. In that setting:
- True positive: Sick people correctly diagnosed as sick
- False positive: Healthy people incorrectly identified as sick
- True negative: Healthy people correctly identified as healthy
- False negative: Sick people incorrectly identified as healthy
In general, positive = identified and negative = rejected. Therefore:
- True positive = correctly identified
- False positive = incorrectly identified
- True negative = correctly rejected
- False negative = incorrectly rejected
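The four outcomes above can be tallied directly from a list of (true status, test result) pairs. A minimal Python sketch; the subject data below are hypothetical:

```python
# Each pair records (actually sick?, tested positive?) for one subject.
# The data are made up for illustration.
results = [
    (True, True),    # sick, tested positive   -> true positive
    (True, False),   # sick, tested negative   -> false negative
    (False, True),   # healthy, tested positive -> false positive
    (False, False),  # healthy, tested negative -> true negative
    (True, True),
    (False, False),
]

tp = sum(1 for sick, positive in results if sick and positive)
fp = sum(1 for sick, positive in results if not sick and positive)
tn = sum(1 for sick, positive in results if not sick and not positive)
fn = sum(1 for sick, positive in results if sick and not positive)

print(tp, fp, tn, fn)  # -> 2 1 2 1
```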
Sensitivity relates to the test’s ability to identify diseased patients as diseased. Because a highly sensitive test rarely misses disease, a negative result on such a test helps rule OUT the disease (the “SnNout” mnemonic). Consider the example of a medical test used to identify a disease. The sensitivity of the test is the proportion of people known to have the disease who test positive for it. Mathematically, this can be expressed as:
Sensitivity = TP / (TP + FN)
Specificity relates to the test’s ability to identify healthy people as healthy. Because a highly specific test rarely mislabels healthy people, a positive result on such a test helps rule IN the disease (the “SpPin” mnemonic). Consider the example of a medical test for diagnosing a disease. The specificity of the test is the proportion of patients known not to have the disease who test negative for it. Mathematically, this can be written as:
Specificity = TN / (TN + FP)
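The two definitions can be sketched in Python. The study counts below are hypothetical (a screening study with 100 diseased and 900 healthy subjects):

```python
def sensitivity(tp, fn):
    # Proportion of diseased subjects who correctly test positive.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Proportion of healthy subjects who correctly test negative.
    return tn / (tn + fp)

# Hypothetical counts: among 100 diseased subjects, 90 test positive;
# among 900 healthy subjects, 810 test negative.
tp, fn = 90, 10
tn, fp = 810, 90

print(sensitivity(tp, fn))  # -> 0.9
print(specificity(tn, fp))  # -> 0.9
```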
Both sensitivity and specificity depend on the cutoff value of a given test:
- For example, raising the cutoff value makes it more difficult to detect a condition [e.g. colorectal neoplasm], as more hemoglobin must be present in the stool for the test to be positive. By raising the cutoff value, it is harder to obtain a positive test and easier to obtain a negative test.
- This causes the number of false negatives (FN) to increase and the number of true positives (TP) to decrease, decreasing sensitivity.
- The same change in cutoff value also increases the number of true negatives (TN) and decreases the number of false positives (FP), increasing specificity.
NOTE: There is always a trade-off between the sensitivity and specificity of a diagnostic test. Typically, as sensitivity increases, specificity decreases, and vice versa.
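The cutoff effect can be illustrated numerically. A small Python sketch using hypothetical stool-hemoglobin measurements (all values are made up); raising the cutoff from 10 to 50 lowers sensitivity while raising specificity:

```python
def sens_spec(cutoff, diseased, healthy):
    # The test is positive when the measured value meets or exceeds the cutoff.
    tp = sum(v >= cutoff for v in diseased)
    fn = len(diseased) - tp
    tn = sum(v < cutoff for v in healthy)
    fp = len(healthy) - tn
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical stool-hemoglobin values (e.g. in µg/g), by true status.
diseased = [15, 40, 55, 80, 120, 8]
healthy = [2, 5, 9, 12, 20, 3]

for cutoff in (10, 50):
    sens, spec = sens_spec(cutoff, diseased, healthy)
    print(cutoff, round(sens, 2), round(spec, 2))
# -> 10 0.83 0.67
# -> 50 0.5 1.0
```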