Taso and Megan Stefanidis of Manalapan, New Jersey, developed symptoms of COVID-19 a few days after gathering with their families for Christmas.
Other family members ended up testing positive for the disease. Megan also tested positive -- but Taso tested negative.
They were surprised by the negative result, especially since it came from the gold-standard PCR test, which has been hailed for its ability to detect even remnant viral genetic material.
Taso subsequently took a rapid antigen test, which turned up positive: "I knew I had it," he said.
False negatives with PCR testing are far more common than one might expect. Daniel Rhoads, MD, of the Cleveland Clinic, who is vice chair of the College of American Pathologists microbiology committee, said PCR sensitivity for detecting COVID-19 is actually around 80%.
That means "one in five people would be expected to test negative even if they have COVID," Rhoads told MedPage Today.
And that's a "best-case scenario" for sensitivity, he added, noting that real-world performance is likely to be lower.
NPV, PPV, and the Real World
For his estimate, Rhoads cited data from a study published in the Annals of Internal Medicine by researchers from Johns Hopkins and the National Institute of Allergy and Infectious Diseases. He said it's not likely that this sensitivity figure would be vastly different now.
That said, it's "reasonable to hypothesize that if shedding of virus is shorter duration or lower quantity in those with immunity, then it could decrease the sensitivity of tests," he added.
Disease prevalence also plays a role in the accuracy of test results. Although the sensitivity and specificity of a test remain static over time, the prevalence of disease in the community will affect the predictive value of a positive or negative result. Factoring prevalence into the mix yields the positive predictive value (PPV) and negative predictive value (NPV).
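As a rough illustration (not from the article), Bayes' rule converts sensitivity and specificity into PPV and NPV once prevalence is known. In the Python sketch below, the 80% sensitivity echoes Rhoads' estimate; the 98% specificity and 10% prevalence are hypothetical figures chosen only for the example.

```python
# Illustrative sketch: turning sensitivity/specificity into predictive
# values via Bayes' rule. The 80% sensitivity echoes Rhoads' estimate;
# the specificity and prevalence values here are hypothetical.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (PPV, NPV) for a test in a population with the given prevalence."""
    true_pos = sensitivity * prevalence                # infected, tests positive
    false_pos = (1 - specificity) * (1 - prevalence)   # healthy, tests positive
    true_neg = specificity * (1 - prevalence)          # healthy, tests negative
    false_neg = (1 - sensitivity) * prevalence         # infected, tests negative
    ppv = true_pos / (true_pos + false_pos)   # P(infected | positive result)
    npv = true_neg / (true_neg + false_neg)   # P(not infected | negative result)
    return ppv, npv

ppv, npv = predictive_values(sensitivity=0.80, specificity=0.98, prevalence=0.10)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # PPV = 81.6%, NPV = 97.8%
```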
"Sensitivity and specificity live in an ideal world," Gary Cornell, PhD, a retired mathematician who , told ֱ. "PPV and NPV live in the real world."
Rhoads likened the sensitivity and specificity reported in a test's package insert to the "miles per gallon" sticker on a new car. "Everybody recognizes that the number is true but probably determined in an idealized setting that is not likely to be achieved in real life," he said.
As disease prevalence increases, PPV increases because there will be fewer false positives -- but NPV decreases because there will be more false negatives, Rhoads said.
"If everybody you know has COVID, you have symptoms of COVID, you test negative, the prevalence in your community is high, your clinical symptoms align with the disease ... you're at a high likelihood of having a falsely negative result," Rhoads explained, noting that the pre-test probability of disease is high.
On the other hand, when screening for influenza in the summer, when disease prevalence is essentially zero, a positive result would be highly suspicious, he added. The post-test probability of having influenza would still be low, and the possibility of a false positive would need to be taken into account.
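Both effects Rhoads describes -- the likely false negative when prevalence is high and the suspicious positive when prevalence is near zero -- fall out of a simple prevalence sweep. The sketch below reuses the illustrative 80%/98% figures from the earlier example; it's a toy calculation, not data from the article.

```python
# Sweep prevalence to show PPV rising and NPV falling as disease becomes
# more common. Sensitivity/specificity are the same illustrative figures
# used above (80% / 98%), not measured values.

sens, spec = 0.80, 0.98
for prev in (0.001, 0.01, 0.10, 0.30, 0.50):
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    print(f"prevalence {prev:6.1%}: PPV = {ppv:5.1%}, NPV = {npv:5.1%}")

# prevalence   0.1%: PPV =  3.8%, NPV = 100.0%  <- summer flu: positive is suspect
# prevalence  50.0%: PPV = 97.6%, NPV =  83.1%  <- high prevalence: negative is suspect
```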
What Else Can Go Wrong?
There's no doubt that PCR testing is the best tool we have for detecting COVID-19, but "I'm comfortable recognizing that no test is perfect, and some tests are less perfect than others," Rhoads said.
"With respiratory viruses, we know PCR is the best laboratory test that we've been able to find, so we lean on that as our reference standard," he continued.
What else can impact the accuracy of tests? For starters, Rhoads said, "you have to get the virus in the sample." That's not just a matter of good sample collection, such as swabbing long enough; it also touches on the renewed debate over whether oral or nasal swabbing is better with Omicron. "It's a reasonable question, but I haven't seen good evidence yet" in favor of oral swabs, he noted.
Taso Stefanidis said that his PCR test involved self-swabbing. "In the beginning, they would shove it up your nose," he said. "Now they give you the swab and tell you to shove it up there so it's a little uncomfortable. I thought I did it right, but maybe I didn't."
Lester Layfield, MD, a pathologist with the University of Missouri Health Care, has studied sources of error in PCR testing. He's found that readouts generated by computers can be erroneous, but adding in human judgment can correct misinterpretations of positivity based on cycle thresholds, for instance.
Other issues include cross-contamination of sample wells inside the analyzer (a positive sample could spill over into a negative sample, for example), or a technologist could place a sample in the wrong well, according to Layfield.
He said it's fair to extrapolate his findings to false negatives as well, with some exceptions.
"You can have problems due to the technologist putting somebody else's negative sample in your well, and therefore you're a false negative," he explained. "Or it can be that the the cycling didn't actually work."
But "cross-contamination is not a problem there," he added.
Rhoads warned that antigen tests -- which the Biden administration began shipping to homes for free this week -- have even lower sensitivity, so negative results should be interpreted with caution when suspicion of infection is high.
"If somebody is symptomatic, and there's a high prevalence of disease in the community, and they test negative on antigen testing, they need to be skeptical about that and consider taking another test," he said.
If it's positive, he noted, there's a very high likelihood that it is indeed a true positive, as these tests have upwards of 99% specificity.
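The same arithmetic shows why a positive antigen result is trustworthy while a negative one may not be. In the sketch below, the roughly 99% specificity comes from the article; the 65% sensitivity is a hypothetical stand-in for "even lower sensitivity," not a figure Rhoads gave.

```python
# Antigen-test sketch: ~99% specificity (per the article) keeps PPV high,
# while a lower sensitivity -- the 65% here is hypothetical, not from the
# article -- pulls NPV down as prevalence climbs.

sens, spec = 0.65, 0.99
for prev in (0.05, 0.30):
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    print(f"prevalence {prev:.0%}: PPV = {ppv:.1%}, NPV = {npv:.1%}")

# prevalence 5%: PPV = 77.4%, NPV = 98.2%
# prevalence 30%: PPV = 96.5%, NPV = 86.8%
```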