Why does aid hate critics, while medicine appreciates them?
Two stories ran today in the New York Times that showed the important role of critics in medicine. In the first, medical researchers found that the usual methods of screening for prostate and breast cancer were not as effective as previously advertised. Screening successfully identifies small tumors, and the rate of operating to remove such tumors has skyrocketed. But the screening regimen has failed to make much of a dent in the prevalence of large prostate and breast tumors, so screening's preventive value is not as great as previously thought. Many other researchers had already pointed out that there is no evidence that the relatively new PSA prostate screening test has reduced prostate cancer deaths (a message that failed to make it to my own doctor, who tells me I am definitely OK once the PSA comes back normal). To make things even worse, some of the operations on small tumors were unnecessary and even harmful: “They are finding cancers that do not need to be found because they would never spread and kill or even be noticed if left alone.” The American Cancer Society concluded that too much emphasis on screening “can come with a real risk of overtreating many small cancers while missing cancers that are deadly.”
In the second story, earlier reports of positive results from an AIDS vaccine trial are coming under more and more doubt. The issue is one very familiar to any statistical researcher: did the apparently positive results from the vaccine trial come from random fluctuations in noisy data, or were the positive outcomes definitely more than could have happened by chance? We have the arcane concept of “statistical significance” to answer this. The NYT ran a story a month ago on the same vaccine trial that suggested definite positive outcomes (“statistically significant”), while today’s story features critics of the original trial results who fear the results were just due to random noise (“not statistically significant”).
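For readers who want the concept made concrete, here is a minimal simulation sketch in Python. The numbers are purely illustrative, not the actual trial data; it asks the critics' question directly: if the vaccine did nothing, how often would chance alone produce a gap this large between the two arms?

```python
import random

# Illustrative numbers only (NOT the actual trial data): suppose a trial
# with equal-size arms observed 51 infections among the vaccinated
# and 74 among the placebo group.
infections_vaccine, infections_placebo = 51, 74
observed_gap = infections_placebo - infections_vaccine

# Null hypothesis: the vaccine does nothing, so with equal arms each of
# the 125 infections was equally likely to land on either side. Simulate
# that world many times and count how often pure chance produces a gap
# at least as large as the one observed.
total_infections = infections_vaccine + infections_placebo
trials, at_least_as_extreme = 20_000, 0
for _ in range(trials):
    in_vaccine_arm = sum(random.random() < 0.5 for _ in range(total_infections))
    in_placebo_arm = total_infections - in_vaccine_arm
    if in_placebo_arm - in_vaccine_arm >= observed_gap:
        at_least_as_extreme += 1

p_value = at_least_as_extreme / trials
print(f"p-value ~ {p_value:.3f}")
# For these made-up numbers, roughly 0.02 one-sided: a gap this big would
# arise from noise alone only about 2 times in 100, so it would usually
# be called "statistically significant."
```

A small p-value does not prove the vaccine worked; it only says noise alone is an unlikely explanation, which is exactly what the two NYT stories are arguing over.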
Suppose these critics were operating in the aid world. Aid defenders would accuse the critics of not being constructive – these studies were 100 percent negative (so what’s your plan for eliminating prostate cancer deaths, you fancy-pants researcher, if you don’t like ours?) They would accuse them of hurting the cause of financing cancer and AIDS treatment. The attacks on the critics might even get personal.
If this were the aid world, the mainstreamers would dismiss the arguments over statistical significance as some obscure academic quarrel that needn’t concern them. How do I know this? I have criticized Paul Collier on numerous occasions for failing to establish statistical significance for many of his aid and military intervention results. I have argued that he is doing “data mining,” which is pretty much the equivalent of running lots of tests on the AIDS vaccine and reporting only the positive results. But I have yet to find anyone who cares about these critiques; on the contrary, the American and British armies seem to base their whole strategies on Collier’s statistical results. In contrast, it’s almost comical to see the heroic lengths to which the writer Donald McNeil Jr. goes in the latest NYT AIDS vaccine story to explain statistical significance to NYT readers. He is saying, hey, you really have to get this if you want to know: did the vaccine in the trial work, or not?
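To see why data mining is so treacherous, here is another minimal Python sketch (all numbers illustrative): run many comparisons on data that contains no real effect at all, and a few will clear the usual significance bar by luck alone.

```python
import random
import statistics

random.seed(1)

def noise_experiment(n=50):
    """Compare two groups drawn from the SAME distribution (no real effect)
    and return an approximate z statistic for the difference in means."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# Run 40 experiments on pure noise; |z| > 1.96 is the usual 5% bar.
hits = [i for i in range(1, 41) if abs(noise_experiment()) > 1.96]
print(f"'Significant' results from pure noise: {hits}")
# At a 5% false-positive rate you expect about 2 spurious hits out of 40.
# Report only those, stay quiet about the other 38, and noise looks like
# a finding.
```

This is why critics insist on knowing how many specifications were tried before the reported one, whether the subject is an AIDS vaccine or the effect of aid on conflict.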
The other feature of both stories is that both throw doubt on excessive confidence in simple panaceas, screening and vaccines. They suggest reality is more complex and that we need to think of new ways of attacking difficult problems like cancer and AIDS. If you are familiar with the aid world, you will recognize the exact analogy to how we discuss solving difficult problems like poverty.
So why does medicine welcome critics while aid hates them? Perhaps we aid critics are just not as good as the medical critics. Or perhaps it is because we care so much more about whether medicine really works than about whether aid or military intervention really works?