This week, after a major government report, we heard that one murder a week is committed by someone with psychiatric problems. Psychiatrists should do better, the newspapers told us, and prevent more of these murders.
It’s great to want to reduce psychiatric violence. It’s great to have a public debate about the ethics of preventive detention (for psychiatric patients and other potential risk groups, perhaps). Before you career off and have that vital conversation, you need to understand the maths of predicting very rare events.
Let’s take the very concrete example of the HIV test. The figures here are ballpark, for illustration only. So: what do we measure about a test? Statisticians would say the HIV blood test has a very high “sensitivity” of 0.999. That means that if you do have the virus, there is a 99.9% chance that the blood test will be positive. Statisticians would also say the test has a high “specificity” of 0.9999 - so if a man is not infected, there is a 99.99% chance that the test will be negative. What a smashing blood test.
But if you look at it from the perspective of the person being tested, the maths gets slightly counterintuitive. Because, weirdly, the meaning of the result an individual gets (its predictive value) changes from one situation to another, depending on the background rarity of the event the test is trying to detect. The rarer the event in your population, the worse the very same test becomes.
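If it helps to see that dependence written down, here is a minimal sketch of the arithmetic in Python. It is nothing more than Bayes' rule applied to a screening test; the function and variable names are mine, for illustration only.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Chance that a positive result is a true positive, at a given background rate."""
    true_positives = sensitivity * prevalence                # affected, correctly flagged
    false_positives = (1 - specificity) * (1 - prevalence)   # unaffected, wrongly flagged
    return true_positives / (true_positives + false_positives)
```

Hold the sensitivity and specificity fixed and shrink the prevalence, and that fraction collapses: the false positives start to outnumber the true ones.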
Let’s say the HIV infection rate amongst high risk men in a particular area is 1.5%. We use our excellent blood test on 10,000 of these men and we can expect 151 positive blood results overall: 150 will be our truly HIV positive men, who will get true positive blood tests; and one will be the single false positive we’d expect from testing the other 9,850 HIV-negative men with a test that is wrong one time in 10,000. So if you get a positive HIV blood test result, in these circumstances your chances of being truly HIV positive are 150 out of 151. It’s a highly predictive test.
But now let’s use the same test where the background HIV infection rate in the population is about one in 10,000. If we test 10,000 people, we can expect two positive blood results overall: one from the person who really is HIV positive, and one false positive, which is what we’d expect, again, from testing 9,999 HIV-negative men with a test that is wrong one time in 10,000.
Suddenly, when the background rate of an event is rare, even our previously brilliant blood test becomes a bit rubbish. For the two men with a positive HIV blood test result in this population, where one in 10,000 has HIV, it’s only 50:50 whether they really are HIV positive.
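Running the same sketch with both background rates makes the contrast plain (figures as in the text, rounded; the names are mine):

```python
# The same excellent test applied to two populations of 10,000 men.
sens, spec = 0.999, 0.9999

for prevalence, label in [(0.015, "high-risk group, 1.5% infected"),
                          (0.0001, "low-risk group, 1 in 10,000 infected")]:
    infected = 10_000 * prevalence
    uninfected = 10_000 - infected
    true_pos = infected * sens            # genuinely infected, correctly flagged
    false_pos = uninfected * (1 - spec)   # uninfected, wrongly flagged
    print(f"{label}: roughly {true_pos:.0f} true and {false_pos:.0f} false positives, "
          f"so a positive result is right {true_pos / (true_pos + false_pos):.0%} of the time")
```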
Now let’s look at violence. The best predictive tool for psychiatric violence has a “sensitivity” of 0.75 and a “specificity” of 0.75. Accuracy is tougher when you are predicting an event in humans, with human minds and changing human lives.

Let’s say 5% of patients seen by a community mental health team will be involved in a violent event in a year. Using the same maths as we did for the HIV tests, your “0.75” predictive tool would be wrong 86 times out of 100. For serious violence, occurring at 1% a year, with our best “0.75” tool, you inaccurately finger your potential perpetrator 97 times out of a hundred. Will you preventively detain 97 people to prevent three events?

And for murder, the extremely rare crime in question, occurring at one in 10,000 a year among patients with psychosis? The false positives swamp everything: on these figures you would have to detain more than 3,000 flagged patients to prevent a single killing. For this purpose, the best test we have is almost entirely useless.

I’m just giving you the maths on rare events. What you do with it is a matter for you.
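If you want to check those figures yourself, here is the same back-of-the-envelope arithmetic as a sketch, using the rates quoted above:

```python
# A prediction tool with sensitivity and specificity of 0.75, at three background rates.
sens = spec = 0.75

for rate, label in [(0.05, "any violence, 5% a year"),
                    (0.01, "serious violence, 1% a year"),
                    (0.0001, "murder, 1 in 10,000 a year")]:
    true_pos = sens * rate                # genuinely at risk, correctly flagged
    false_pos = (1 - spec) * (1 - rate)   # not at risk, wrongly flagged
    flagged_per_real_case = (true_pos + false_pos) / true_pos
    print(f"{label}: about {flagged_per_real_case:,.0f} people flagged for every genuine case")
```

That last line is where the one-in-10,000 base rate bites: over three thousand people flagged for every one who would actually go on to kill.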