As communities ramp up testing for COVID-19, an interactive calculator is available to help health care providers better visualize test accuracy, but the tool has limitations, says an infectious disease expert.
The tool, published in May 2020 by the British Medical Journal (BMJ), illustrates the performance of a COVID-19 test based on pre-test probability, sensitivity, and specificity.
Pre-test probability is described as the likelihood a person has COVID-19 based on their symptoms. Test sensitivity is the proportion of patients with COVID-19 who test positive. Test specificity is the proportion of patients without COVID-19 who test negative. As the pre-test probability, sensitivity, and specificity values are adjusted, the expected rates of true and false positive and negative results change, and the tool displays them graphically.
COVID-19 test results have been getting renewed attention since infection rates around the globe have surged. Although the tool is interesting, it has limitations, says Erwin Haas, M.D., an infectious disease expert and policy advisor to The Heartland Institute, which co-publishes Health Care News.
“For me, the graphic display of what is basically a spreadsheet, is like a computer game to a bored adolescent,” said Haas. “I keep playing with it, inputting the values I assumed when I wrote about this in the American Thinker (July 21). It’s a fascinating intellectual exercise.”
Haas calculated earlier that two to four million positive results were likely false. With the calculator, Haas says the number was more likely 1.3 million.
When one decreases the percentage of pre-test probability in the BMJ calculator—which illustrates what happens when state health officials increase randomized, mass testing in a largely healthy population—the calculator shows a significant increase in the number of false positives.
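The effect the calculator illustrates follows directly from Bayes' reasoning: hold sensitivity and specificity fixed and lower the pre-test probability, and the share of positives that are false grows. A minimal sketch, again using assumed values (70 percent sensitivity, 95 percent specificity) rather than any figures from the article:

```python
# Why mass screening of a largely healthy population inflates false
# positives: the same hypothetical test applied at two prevalences.
# Sensitivity and specificity values are assumptions for illustration.

def positive_predictive_value(pre_test_prob, sensitivity, specificity):
    """Probability that a positive result is a true positive."""
    true_pos = pre_test_prob * sensitivity
    false_pos = (1 - pre_test_prob) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Clinically sick patients vs. random mass screening (assumed rates)
high = positive_predictive_value(0.50, 0.70, 0.95)
low = positive_predictive_value(0.01, 0.70, 0.95)
print(round(high, 2), round(low, 2))  # → 0.93 0.12
```

Under these assumptions, the same test that is right 93 percent of the time in a sick population is right only about 12 percent of the time when 1 in 100 of those screened actually has the disease, so nearly 9 in 10 positives would be false.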
Up to 90 percent of the positive cases identified by PCR testing in Massachusetts, Nevada, and New York were false positives, the New York Times reported on August 29.
A big weakness of the tool is that it almost completely ignores the possibility of false-positive test results, says Haas.
“It assumes a high frequency of positives because in May tests were done on clinically sick patients (who accurately were positive) and does not emphasize the Bayesian insight that testing for low-frequency diseases can be fraught (with error),” said Haas.
A growing number of studies are indicating a need to focus on false positives and their effect on response efforts to the pandemic.
The Lancet Respiratory Medicine on September 29 published a study showing that as countries expand testing to asymptomatic individuals, the pre-test probability, as described above, proportionally decreases.
The authors note that the prevalence of COVID-19—how common the disease is in a specified at-risk population at a particular point in time—is continuously changing. This value directly affects the pre-test probability a person has the disease. The authors conclude large-volume screening during a time of low prevalence could do more harm than good by prompting authorities and other organizations to implement restrictions that cause health, financial, psychological, and social damage.
“[The BMJ calculator] may not be very useful for patients to read, as it’s technical,” said Haas. “However, it might alert better-trained and academic clinicians to the mistakes that ‘testing, testing, testing’ can generate.”
Richard Larkin McLay (email@example.com) writes from Minneapolis, Minnesota.
Jessica Watson, Penny Whiting, John Brush, “Interpreting a Covid-19 Test Result,” BMJ, May 12, 2020: https://www.bmj.com/content/369/bmj.m1808
Elena Surkova, Vladyslav Nikolayevskyy, Francis Drobniewski, "False-positive COVID-19 Results: Hidden Problems and Costs," The Lancet Respiratory Medicine, September 29, 2020.