Don’t panic: your positive test for AIDS is probably wrong.
The business plans of casinos, whether tribal or Las Vegas, black-tie Monte Carlo affairs from a James Bond film or operations hidden in the back room of a tattoo parlor, all share a common element: your inability to estimate probabilities correctly.
You, along with the rest of the human race, start with very bad intuition about probability.
This bad intuition usually expresses itself as overestimating your chance of winning, which means the house has a pretty strong chance of making a profit. The scientific study of probability was started by Pierre de Fermat and Blaise Pascal in the 17th century, in order to understand the odds that arise in different types of gambling. In this article we are going to look at a simple application of probability to health care. This particular example has been presented, in various forms, over and over, so I apologize if it’s already familiar to you, but there are good reasons to keep repeating it: some of the people who most need it are still not aware of it.
Assume that about one person in one thousand has AIDS and that the initial test used to check for AIDS is 99% accurate. These are not the real values, which change every year, but they make the calculations in the example easier to follow. A patient, having seen a bunch of scary stories about AIDS, decides to get tested. The test comes back positive. What, then, is the chance that they have AIDS?
Picture 100,000 people across the country who have an AIDS test done. Among them are about 100 who have AIDS (one in a thousand), and the test results are correct 99% of the time. Of the 100 people who have AIDS, 99 get a positive result and 1 gets a negative result. Among the 99,900 people who do not have AIDS, the test is wrong one time in 100, so 999 of these AIDS-free people get an incorrect positive result and the remaining 98,901 get a correct negative result. Here’s where interpreting the positive results gets tricky.
We know the patient we are tracking got a positive result. That means the patient is not just one of the 100,000 people in our example: he is one of the much smaller group of 99+999=1098 people who got a positive result. To compute a probability, which is always a number between 0 and 1, you divide the number of outcomes you’re interested in by the total number of outcomes you looked at. That makes the chance of having AIDS, given a positive result:
99/1098 ≈ 0.0901, or about 9%
Even with a 99% accurate test, the chance that the patient’s positive result is correct is only about 9%. Weird, huh?
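If you would rather check the arithmetic than trust it, here is a short Python sketch of the frequency-count argument above, using the article’s assumed numbers (1-in-1,000 prevalence, 99% accuracy); the variable names are my own:

```python
# Recreate the worked example with frequency counts.
# Assumed values from the article: 1-in-1,000 prevalence, 99% accuracy.
population = 100_000
prevalence = 1 / 1_000
accuracy = 0.99

has_aids = population * prevalence           # about 100 people
no_aids = population - has_aids              # about 99,900 people

true_positives = has_aids * accuracy         # 99 correct positives
false_negatives = has_aids * (1 - accuracy)  # 1 incorrect negative
false_positives = no_aids * (1 - accuracy)   # 999 incorrect positives
true_negatives = no_aids * accuracy          # 98,901 correct negatives

# Chance a positive result is correct: correct positives divided by
# all positives, exactly the 99/1098 computation in the text.
ppv = true_positives / (true_positives + false_positives)
print(f"{ppv:.1%}")  # prints 9.0%
```

The division in the last step is the same “outcomes of interest over total outcomes” rule described above, applied to the 1098 people with positive results.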
It is possible to diagram this sort of situation as a tree in which each split represents one type of information. The tree below splits first on has/does not have the HIV virus and then on correct/incorrect test results. This gives us four sorts of people – represented by the lowest boxes in the tree. Only two of these groups got positive test results.
What’s going on here is that the rarity of AIDS makes false positives more common than true positives. In practice, if you get a positive result a second test is performed. The second test is far more accurate (and far more expensive).
In general, any positive test for a rare condition has a good chance of being wrong. What makes this worse is that many physicians are unaware of this situation, and of several other simple applications of probability to the interpretation of medical test results.
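The general claim, that rarity itself is what makes positive results unreliable, can be seen by redoing the calculation for several prevalences while holding the test’s accuracy fixed at 99%. This sketch (the function name is mine) shows how quickly a positive result loses its meaning as the condition gets rarer:

```python
# How trustworthy a positive result is, as a function of how rare the
# condition is, for a fixed 99% accurate test.
def positive_predictive_value(prevalence, accuracy=0.99):
    """Chance that a positive test is correct, by the same
    frequency-count reasoning as the worked example."""
    true_pos = prevalence * accuracy
    false_pos = (1 - prevalence) * (1 - accuracy)
    return true_pos / (true_pos + false_pos)

for prev in [0.5, 0.1, 0.01, 0.001, 0.0001]:
    print(f"prevalence {prev:>7}: positive result correct "
          f"{positive_predictive_value(prev):.1%}")
```

At 50% prevalence a positive result is right 99% of the time; at the article’s 1-in-1,000 it drops to about 9%; at 1-in-10,000 it is right only about 1% of the time.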
A study in the Oxford University Press journal QJM in 2003 noted that the rise of what is called evidence-based medicine requires that medical students and physicians be able to work with probability in a facile and transparent manner. The paper reported a study that tested the ability of medical professionals to compute probabilities relevant to medical decision making and found them badly wanting. Put this in the context of our example: doctors may leave a patient thinking they have a much higher chance of having AIDS than they really do until the second test comes in. This is not a trivial matter: in some cases it can lead to suicide.
I picked this example because I’ve run into it several times in my life. A friend or student would get a positive test for a fairly rare disease with very bad health outcomes, dark social consequences, or both. They would then panic. It is really hard to lead a panicking person through probability calculations, so I’m hoping the following rule of thumb will catch on: a positive test for a rare condition is more likely to be wrong than right. In general, you want a second test or other evidence that confirms the condition.
It’s always worth presenting information in more than one way, because different people understand in different ways. The picture above is a Venn diagram. The big rectangle represents everybody. The small circle represents people with HIV, the large one people with a positive test. The most common type of result is colored green: a correct negative test. The next most common result is an incorrect positive test, the orange part of the diagram. The 99 correct positive results are colored red, and the one incorrect negative result is colored black. The diagram is also labeled with the number of people who got each type of result.
Another issue that jumps out of the example is the one person with HIV in his blood who got a negative result. The good news is that this is the rarest possible outcome. The bad news is that we have someone with HIV who is medically certified as being HIV-negative. This is another place where people’s understanding of probability falls down on the job. A negative test does not mean a person is clean. It means there are very good odds they are clean. The take-home message here is that safe sex is a good idea no matter who got what test result.
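Those “very good odds” can be made precise with the same counting argument run in the other direction: correct negatives divided by all negatives. A quick sketch with the article’s assumed numbers (the function name is mine):

```python
# How trustworthy a NEGATIVE result is, under the article's assumed
# numbers: 1-in-1,000 prevalence, 99% accurate test.
def negative_predictive_value(prevalence, accuracy=0.99):
    """Chance that a negative test is correct: correct negatives
    divided by all negatives."""
    true_neg = (1 - prevalence) * accuracy
    false_neg = prevalence * (1 - accuracy)
    return true_neg / (true_neg + false_neg)

print(f"{negative_predictive_value(0.001):.4%}")
```

The answer is roughly 99.999%: out of the 98,901 + 1 = 98,902 negative results in the example, only one is wrong. So the asymmetry cuts both ways: for a rare condition, negatives are overwhelmingly reliable while positives are not, which is exactly why only the negative result can be taken more or less at face value.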
I hope I see you here again.
Department of Mathematics and Statistics
University of Guelph, Ontario, Canada