Friday, May 22, 2020

Say you test positive (or negative), what then?

Positive Test Then What?

If I test positive, how sure am I that it is a true positive? I write this post in the context of SARS-CoV-2, but the conclusions are exactly as relevant for pregnancy tests or any other detection problems.

I am writing this post because I keep hearing and reading wrong conclusions everywhere regarding SARS-CoV-2 tests. Interpreting test results correctly is crucial both when making decisions about one's health and when discussing public health policies. I am going to try to explain it as simply as I can.

As with any detection problem, the statistics are subtle and unintuitive, but not so difficult.
I have studied statistics and detection and estimation theory in depth, and that part I understand well; this post is concerned with the mathematics of certainty.
I am not an expert in biology, so when I talk about that part, take it with a grain of salt.
Still, in order to understand what the results of a test mean, it is important to understand what it measures.

Terminology: Measuring How Good a Test Is

There are four numbers that matter for a test: PPV, NPV, sensitivity and specificity.
How sure I am that a positive is a true positive is measured by the PPV (positive predictive value); the NPV plays the same role for a negative.
These two numbers depend on circumstances and need to be estimated.

How good the test itself is, is measured by its sensitivity (and by its specificity for negatives): the quotient between the true positives and everything that should have been a positive (true positives + false negatives).

How Do I Interpret My Test Results?

I run a test to check a hypothesis. For example, I take a test for Covid-19. It may be a qPCR to see if I am infectious, or an IgG ELISA to see if I already had the disease.

Let's say I am in the second situation: I am trying to see whether I may be immune (or at least whether I have developed some antibodies; there are still open questions about how immunity works for this disease). I look at the test and it has a specificity of 95.0% and a sensitivity of 80.0%. This means that if I test 100 people who never had the disease, I get 5.0 false positives on average (specificity $=\frac{95.0}{95.0+5.0}=95.0\%$). If I test 100 people who did have it, I get an average of 20 false negatives (sensitivity $=\frac{80}{80+20}=80.0\%$). This looks quite good compared with many of the available tests. Here, false positives are worse than false negatives: we are talking about immunity.

Assume I have not had any symptoms, and that the seroprevalence where I live (the fraction of people who had the disease and would test positive with an ideal, perfect test) is 5%. If I test positive, how sure am I that I actually had the disease, and so may be immune? Around 90%, surely? No: that is completely wrong, and it is what this post is about.

Before concluding that you are immune, and before discussing changes to public policy (immunity passports, seroprevalence studies), be sure you understand the issues involved.

In the above example, I have about a 50% chance of being a true positive. What? Why?

Step by Step - Testing the whole population

So, step by step: let's start with 100 people, of whom 5% are infected. That means 5 out of 100 have been infected. Let's draw them (I have chosen a population of 100 to make the numbers easy, but the conclusions extrapolate to a population of any size).

1) Population: 5 with antibodies out of 100



Then we test them. We obtain 1 false negative (on average, 20% of the 5 infected; there could be none, and the results would be similar) and 4 false positives (about 5% of the 95 non-infected, rounded to whole people). We are bound to have some false positives: even though the test is very good, there are a lot of true negatives, and some of those tests are bound to fail.


2) Test results for the population (95% specificity, 80% sensitivity):

4 true positives, 91 true negatives,

4 false positives, 1 false negative

$$\text{specificity}=\frac{91}{91+4}\approx 0.95$$
$$\text{sensitivity}=\frac{4}{4+1}=0.80$$
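These numbers can be checked directly from the counts above (a sketch; the variable names are mine):

```python
tp, fp, tn, fn = 4, 4, 91, 1   # counts from the worked example above

specificity = tn / (tn + fp)   # 91/95, close to the nominal 0.95
sensitivity = tp / (tp + fn)   # 4/5 = 0.80
ppv = tp / (tp + fp)           # 4/8 = 0.50: half of the positives are wrong

print(round(specificity, 2), round(sensitivity, 2), round(ppv, 2))
# → 0.96 0.8 0.5
```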





In our example, we took the test and got a positive, so we are either a false positive or a true positive. Now let's examine just the people who tested positive.

3) Subset who tested positive


Half of them are wrong!
The parameters of the test are only half of the story. When buying the test, I am interested in the specificity and sensitivity, which measure how accurate the test is. But in order to interpret a result, I also need the prevalence in the tested population, or, in other words, the probability that I had the disease in the first place. If that probability is too low, then no matter how good the test is (and no test is perfect), the results are going to be disappointing.
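This dependence on the prior can be made explicit with Bayes' theorem. A minimal sketch (note that the exact calculation at 5% prevalence gives about 46%; the worked example lands on 50% only because it rounds to whole people):

```python
def ppv(prevalence, sensitivity=0.80, specificity=0.95):
    """Exact positive predictive value via Bayes' theorem, given a prior prevalence."""
    true_pos = prevalence * sensitivity          # P(positive result AND infected)
    false_pos = (1 - prevalence) * (1 - specificity)  # P(positive result AND healthy)
    return true_pos / (true_pos + false_pos)

for p in (0.01, 0.05, 0.20, 0.50):
    print(f"prevalence {p:.0%} -> PPV {ppv(p):.0%}")
# → prevalence 1% -> PPV 14%
#   prevalence 5% -> PPV 46%
#   prevalence 20% -> PPV 80%
#   prevalence 50% -> PPV 94%
```

The same test goes from nearly useless to quite convincing purely as a function of who is being tested.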

If I can shrink the share of true negatives among the people tested, raising the prior probability, I can be more certain of the result. If I had symptoms, for example, things are very different.
Let's see what happens.

Step by Step - Looking only at people who got symptoms

For this example, we are going to assume that 7% of the population got symptoms.

Of course, looking at people who presented symptoms is just one way to adjust the prior probability; we can also look at people who were in close contact with infected people, among other criteria.



1) Population: 7 people who presented symptoms




2) Test results for the 7 people who presented symptoms

(95% specificity, 80% sensitivity)

4 true positives, 2 true negatives

0 false positives, 1 false negative


When we look at the subset of the tested population that presented some symptoms at some point in the last month, the story is completely different:

3) Subset who tested positive


Almost 100% of the positives are true positives (we lose some precision because the population is small and we cannot have half a person, but with a bigger population the result is still very good, around 98%).
We don't need to test only the people with symptoms; rather, our knowledge that someone had symptoms, was in contact with an infected person, or any other relevant information, should inform our prior and hence our conclusions.
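Reading the counts above as a prior of 5 infected among the 7 symptomatic (my interpretation of the example), the exact Bayes calculation confirms the picture:

```python
prior = 5 / 7                     # assumed: 5 of the 7 symptomatic were infected
sens, spec = 0.80, 0.95           # test parameters from the example

tp = prior * sens                 # probability of being a true positive
fp = (1 - prior) * (1 - spec)     # probability of being a false positive
ppv = tp / (tp + fp)
print(f"{ppv:.1%}")               # → 97.6%
```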

In conclusion...



One of the arguments against an immunity passport (the others being bad incentives for deliberate contagion, and discrimination) is that when the prevalence of the disease is low, a positive test by itself does not carry much certainty, and people are incentivized to lie.
Another problem is the false sense of security that a false positive conveys. People may believe they are immune (which may not be true even with a true positive, unless they are tested for neutralizing antibodies), feel protected, and put themselves and their community at risk.

And What If I Get a Second Test?

A second test may help, or not. It depends on the reason I may have got a false positive. Some tests look for antibodies, and there is cross-reactivity with antibodies against other viruses. For example, in SARS-CoV-2 there may be cross-reactivity with some cold viruses (which are also coronaviruses).
If I got the positive because of that, a second test will not help. Understanding whether a false positive behaves like a random event or like a repeatable one is important when deciding whether to take a second test and how to interpret its result. Taking a different test that looks for different antibodies may help, for example. If the two tests are statistically independent (meaning that a false positive in the first one tells me nothing about a mistake in the second one), the second test adds real information and can confirm the result. This is called orthogonal testing in medical parlance. In the case above, if both tests are equally good and independent, a second positive raises the certainty from about 50% to over 90%: the posterior after the first test becomes the prior for the second, and the chance of the test failing twice is the product of two small probabilities.
If they are not independent, as in the cross-reactivity example, the second test may not add any information at all.
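Under the independence assumption, the update can simply be chained, feeding each posterior back in as the next prior. A sketch with the parameter values from the example above:

```python
def update(prior, sensitivity=0.80, specificity=0.95):
    """Posterior probability of disease after one positive result (Bayes' rule)."""
    tp = prior * sensitivity
    fp = (1 - prior) * (1 - specificity)
    return tp / (tp + fp)

p0 = 0.05          # prevalence before any test
p1 = update(p0)    # ≈ 0.46 after the first positive
p2 = update(p1)    # ≈ 0.93 after a second, independent positive
```

A correlated second test (the cross-reactivity case) violates the independence assumption baked into `update`, so this chaining would overstate the certainty there.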

Probability: Bayes' Theorem

The theory behind all this is Bayes' theorem, and the starting probabilities conditioning the results are called priors, or prior probabilities. To learn more, you can study Bayesian inference.
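Concretely, writing $D$ for "had the disease", $p$ for the prevalence (the prior) and $+$ for a positive result, the PPV used throughout this post is just Bayes' theorem:

$$P(D\mid +)=\frac{P(+\mid D)\,P(D)}{P(+\mid D)\,P(D)+P(+\mid \bar{D})\,P(\bar{D})}=\frac{\mathrm{sens}\cdot p}{\mathrm{sens}\cdot p+(1-\mathrm{spec})\,(1-p)}$$

With the numbers of the first example ($p=0.05$, sensitivity $0.80$, specificity $0.95$) this gives $\approx 0.46$, the roughly 50% we found by counting people.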


Thanks to Elisa for her collaboration in this post. The mistakes are all mine.

