Testing for COVID-19 Infection

Accurate testing for SARS-CoV-2 infection, by which we mean testing with few false positives as well as few false negatives, is important not only for clinical management of individual cases but also for epidemiological case tracing, limiting the spread of infection and informing public health strategies. In the latter situations, tests that are rapid, cheap and easy to perform are particularly desirable.

Two main forms of testing for SARS-CoV-2 infection are in use.

  • Antigen testing, which includes the self-administered immunochromatographic lateral flow test (LFT), detects viral coat material and is developed by raising a specific antibody against the antigen target. It therefore measures active production of the viral protein that constitutes the antigen. Developing such a test requires knowledge of how the virus behaves in the host and relies on generating a sensitive and specific antibody for use in the test.
  • Molecular tests, such as polymerase chain reaction (PCR), loop-mediated isothermal amplification (LAMP) and clustered regularly interspaced short palindromic repeats (CRISPR) tests, amplify viral RNA. These tests are potentially specific for different variant mutations of SARS-CoV-2. The gold standard is considered to be lab-based reverse transcription PCR (RT-PCR). Rapid testing, such as the LAMP test, skips some of the time-consuming stages of formal PCR and is therefore useful for screening and epidemiology.

Many different studies have reported on the performance of different brands of rapid antigen and molecular tests. This article discusses a Cochrane review of diagnostic test accuracy that collected data from 64 studies investigating 16 different antigen tests and five different rapid molecular tests. In all there were 24,087 samples, of which 7,415 were positive on the subsequent gold standard RT-PCR test. No study actually included clinical diagnosis of infection as a standard or criterion of infection.

Antigen tests had a sensitivity in symptomatic cases of 72% (95% CI 63.7 to 79%). The sensitivity was higher in the first week of infection (78.3%) than in the second week (50%). The overall specificity was 99.6% in both symptomatic and asymptomatic people (though if a false positive occurs in a symptomatic person, one wonders what they actually had).

Analysing the data a different way, the positive predictive value (PPV) was only 84-90% for the most sensitive brands at 5% population prevalence, and at 0.5% prevalence in asymptomatic people the PPV was only 11-28%!
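As a rough check, PPV can be computed from sensitivity, specificity and prevalence (the figures below use the pooled 72% sensitivity and 99.6% specificity quoted above, not the brand-specific values behind the quoted ranges):

PPV = sensitivity × prevalence / (sensitivity × prevalence + (1 − specificity) × (1 − prevalence))

At 5% prevalence this gives 0.036/(0.036 + 0.0038) ≈ 90%, consistent with the top of the 84-90% range. At 0.5% prevalence it gives 0.0036/(0.0036 + 0.004) ≈ 47%, and with a brand specificity nearer 97% it falls to roughly the 11% quoted.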

Molecular tests had an average sensitivity of 73% (66.8 to 78.4%) and a specificity of 97.2 to 99.7% depending on brand. In most trials the samples were collected in labs rather than under real-life conditions, and there are no data about symptoms.

For reference, the WHO considers >80% sensitivity and >97% specificity appropriate for a test intended to replace a lab test.

The authors note that in low-prevalence settings the dramatically reduced PPV means that confirmatory tests may be required, and that data are needed both for the settings in which tests are intended to be used and on user dependence (i.e. self-testing).

Journal Club Conclusions

Why the huge difference between positive predictive value and specificity?

Specificity = 1 minus false positive rate (FPR)

FPR = false positives/all negative cases (i.e. true negatives and false positives)

So high specificity means the number of false positives is low compared to the number of true negatives.

PPV = true positives/all positive results (i.e. true positives and false positives)

So a high PPV means the number of false positives is low compared to the number of true positives.

The difference between specificity and PPV is essentially that specificity is calculated in relation to the total number of actual negative cases, while PPV is calculated in relation to the total number of positive test results. PPV can therefore be much worse than specificity if true negatives are much more common than true positives, which is exactly what happens when infection rates in the population are very low.
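To make this concrete, here is a minimal sketch in Python (the 72% sensitivity and 99.6% specificity are the pooled antigen-test figures quoted above; the prevalences are illustrative, with 31% approximating the proportion of positives in the review's samples):

    def ppv(sensitivity, specificity, prevalence):
        # Positive predictive value from sensitivity, specificity and prevalence.
        true_positives = sensitivity * prevalence
        false_positives = (1 - specificity) * (1 - prevalence)
        return true_positives / (true_positives + false_positives)

    # Pooled antigen-test figures from the review; prevalences are illustrative.
    for prevalence in (0.31, 0.05, 0.005):
        print(f"prevalence {prevalence:6.1%}: PPV = {ppv(0.72, 0.996, prevalence):.1%}")

Run as written, this gives roughly 99%, 90% and 48%: the same test, with the same specificity, yields a sharply falling PPV as prevalence drops.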

Is it disingenuous to quote specificity based on trials where infection rates were 31% when in the real world the infection rates are perhaps two orders of magnitude lower than that? If someone wanted to argue that they were being denied going to work, going to school or going on holiday based on a test where the predictive value was only 11-28%, would they have a good case? Is it worthless as a screening tool? If the policy is to go on to a molecular test if the result is positive, is this also invalid if the PPV of the molecular test is similarly low?
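On the last question, a worked example with illustrative figures (assuming the two tests' errors are independent, which may not hold if both respond to the same cross-reacting material): if a positive antigen result at 0.5% prevalence carries a PPV of 28%, that 28% becomes the pre-test probability for the confirmatory molecular test. With 73% sensitivity and, say, 98% specificity, the combined PPV is 0.73 × 0.28 / (0.73 × 0.28 + 0.02 × 0.72) ≈ 93%, or ppv(0.73, 0.98, 0.28) in the sketch above. Sequential testing can therefore rescue the PPV, provided the tests do not share the same failure modes.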

Clearly, the use of tests should be tailored to the information that can be provided. In an outbreak of high prevalence, one wants a sensitive test to pick up people who might have the infection after contact tracing. One could use a specific screening test and lab test only those who screen negative, to make completely sure. The priority is not to miss positive cases.
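The same arithmetic applies to negatives. Assuming the pooled antigen figures above and an illustrative 30% outbreak prevalence, the negative predictive value is 0.996 × 0.70 / (0.996 × 0.70 + 0.28 × 0.30) ≈ 89%, so roughly one screen negative in ten would actually be infected, which is why lab testing the negatives is suggested.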

If, medicolegally, one wants to prove that workers were not the source of a case or outbreak when the prevalence of infection is low, one may as well go straight to lab testing, as there are too many false positives in such a situation.

In a situation where someone wants to question a positive result, it is not clear that rapid molecular testing is superior to antigen testing, since the specificity of antigen tests is not clearly lower than that of some rapid molecular tests; a lab-based PCR may again be necessary.

There are also biological as well as statistical issues. For example, antigen tests may theoretically give false positives if the nature of the generated antibody to a viral coat antigen is not clear. If the trial was done in summer, with no winter flu or coronavirus common colds circulating, in a setting where one in three subjects had COVID-19 and none had any other type of infection, has the generated antibody really been shown to be specific for SARS-CoV-2? The same may apply to demographic factors, such as expecting the test to perform the same in children or in care homes as in a healthy adult test population.

On the other hand, an antigen test, by detecting active protein production, may correlate better with actual infectivity than a bit of residual RNA, and may therefore be more biologically useful for epidemiological control.

Finally, in extremely high prevalence trial populations where there is no actual clinical corroboration, the absolute reliance upon lab-based RT-PCR as a gold standard may be of concern.
