A balanced reaction to "The Civil War in Development Economics"
Aid Watch received the following very thoughtful comment. The author wishes to remain anonymous:

The debate in the academic world sounds fascinating! And it mirrors in some ways the ongoing debates I have within the international development practitioner community, where I work. Due to my background and current job, I'm the resident RCT "expert" of sorts in my organization and get to have lots of fascinating discussions with program and M&E staff. I see the following pros and cons for randomized evaluation (or RCTs - randomized controlled trials - as they are often called in the NGO world):
PROS:
- As always, the key point: you can't attribute causal impact without a randomly assigned control group. Selection bias and other problems affect every other method to varying degrees.
CONS (or rather, arguments for having additional approaches in your evaluator's toolbox):
- RCTs are harder to do for long-run impacts. Either you leave the control group without the program for 10-20 years, which is an ethical and logistical challenge, or you rely on some assumptions to add effects together from repeated follow-up surveys. For example, if you delayed the start of the program in the "control group" for three years and then did a follow-up survey every three years, you could add the difference between 3 and 0 years of exposure, plus the difference between 6 and 3 years, plus the difference between 9 and 6 years, and so on - but you'd have to assume things like linearity in the effect over time, or rule out certain interactions with one-off global events. (I'm still thinking about this whole idea; see the rough sketch after this list.)
- With a complex or system-wide program, you often can't have a control group, such as when you are working on a national scale. For example, working to change gender injustices in a country's laws.
- Context is important, and you can't always get that with good background research or a good pilot before an RCT, though you should try. My organization talks a lot about "mixed methods" - mixed quantitative and qualitative research being a good way to combine the strengths of each. In fact, the RCT that I'm overseeing includes a team of anthropologists.
- Qualitative research can also be more responsive if you get unanticipated results that are hard to explain.
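To make the chained-differences idea in the first bullet concrete, here is a minimal simulation sketch. Everything in it is hypothetical: the outcome model assumes the true effect is exactly linear in years of exposure (one unit per year), the mean_outcome helper and all numbers are invented, and the point is only to show how summing the three-year treatment-control gaps approximates the nine-year effect.

```python
# Hypothetical sketch of the "chained differences" design described above:
# the control group starts the program 3 years late, surveys happen every
# 3 years, and the treatment-control gaps are summed to approximate the
# long-run effect. All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def mean_outcome(years_exposed, n=500, noise_sd=2.0):
    """Mean survey outcome for n households, assuming the true effect
    is linear in exposure: 1 unit per year in the program."""
    return (1.0 * years_exposed + rng.normal(0, noise_sd, size=n)).mean()

chained = 0.0
for year in (3, 6, 9):
    # Treatment group has `year` years of exposure; the delayed control
    # group has 3 fewer. Each gap estimates one 3-year slice of the effect.
    chained += mean_outcome(year) - mean_outcome(year - 3)

print(f"chained estimate of the 9-year effect: {chained:.2f} (true: 9.00)")
```

One design note on this sketch: because both arms are surveyed on the same calendar dates, purely calendar-time shocks cancel within each gap and the sum telescopes even if the effect is nonlinear in exposure; the hard-to-test assumptions bite when calendar events interact with how long a household has been in the program.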
So, being a good two-handed economist, I do see both sides now, though I'm still pro-RCT. It helps that I was at that bastion of qualitative methodology, the American Evaluation Association conference (another AEA!), and heard some good indoctrination on the anti-RCT side.
It's particularly interesting to be at my INGO, since much of the organization's work is focused on areas that are tough to evaluate with RCTs, including lobbying the U.S. government; humanitarian relief work (though we have a few staff who want baselines for refugee camps); and many small-scale, long-term, idiosyncratic projects in communities facing severe challenges.
The closest I've come to agreement with people who are anti-RCT is to have all of us agree that it's a great tool in the right circumstances, but that it's one of many good tools. What we always disagree on is whether RCTs are overused (them) or underused (me). And many people hate the phrase "gold standard"; it's a red flag. I use it anyway, as in "RCTs are the gold standard for short-run impact evaluations that you want to be free from selection bias."
I think that the "right circumstances" for RCTs would include important development approaches, such as clean water or microcredit, that haven't yet been evaluated with RCTs; or big programs that are finally stable in their implementation after an initial period of experimentation and adaptation. Pilots are OK, too, though that's a harder sell; program staff want to be able to get in there and experiment away with what works and what doesn't without worrying about rigorous evaluation.
It'll be interesting to see where these discussions are in 5 or 10 years.