The coming age of accountability

There was such a great audience yesterday at the Brookings event on What Works in Development. (If you are a glutton for punishment, the full-length audio of the event is available on the Brookings web site.) In the end, what struck me was the passion for just having SOME way to KNOW that aid is benefiting the poor, which dwarfed the smaller issue of Randomized Experiment methods vs. other methods.

And extreme dissatisfaction with aid agencies who ignore even the most obvious signs that some aid effort is not working. (Example cited in the Brookings book: a World Bank computer kiosk program in India celebrated as a "success" in its "Empowerment" sourcebook. Except that the computers sat in places without functioning electricity or Internet connections. Critics pointed that out, and yet officials still defended the program as contributing to "Empowerment." Abhijit Banerjee asked, "Empowered how? Through non-working computers?")

It is awfully hard to get an accountability movement going that would have enough political power to force changes on aid agencies, say, away from forever mumbling "empowerment," towards actually making computers light up.

Accountability is not something that anyone accepts voluntarily. It is forced on political actors by sheer political power from below. That's what democratic revolutions are all about. Can we hear a lot more from people in the poor countries protesting bad aid (thank you, Dambisa Moyo) and praising good aid (thank you, Muhammad Yunus)? Can we hear A LOT more from the intended beneficiaries themselves? Can their allies in rich countries help them gain more political leverage with rich country aid agencies?

I don't know yet. But there is a lot more awareness of the accountability problem than there was a decade ago. The blog dialogues on making Haiti disaster aid work are one example. The size, savvy, and enthusiasm of the audience yesterday was one more small hopeful sign.

Watch out, aid agencies, accountability is coming.


“What works in development?” Apparently not markets for books on “What works in development”

A previous blog post highlighted the book Jessica Cohen and I edited, “What Works in Development” (self-promotion disclaimer: I was just an organizer; the attractions were stellar academics heatedly debating the pros and cons of Randomized Experiments in development). We got a nice response from readers (the post was the second most popular on Aid Watch since we launched the new site on October 14th), and many seemed to want the book. However, for markets to work, we need not only demanders but also rational suppliers. Here something has gone wrong, though I am not quite sure what. After the book jumped up the ranks at Amazon and Barnes & Noble, it sold out at both sites and is indefinitely out of stock.

As for the enigmatic publisher, Brookings Institution Press, finding the book on its web site is a bit of a challenge. It is not in the “New Releases” section, which has 6 books published in 2008. It is not in the section on “Poverty and Development.” You would have more luck looking at the Brookings Global Economy and Development Program website, which features the book, but that requires a bit of inside knowledge.

To save you endless searching, I spent half the day tracking it down for you. The Brookings link is here. Maybe this will help Amazon find the book from Brookings as well.

Sorry for whining, but when you have worked hard at facilitating something that the customers seem to want, it’s a bit frustrating to have recalcitrant suppliers get in the way.

After all this, you can actually order the book online from the Brookings Institution Press, allowing for one to two weeks’ delivery. Otherwise, you could get Sarah Palin’s insights on What Works from Amazon, delivered tomorrow.


The Civil War in Development Economics

Few people outside academia realize how badly Randomized Evaluation has polarized academic development economists for and against. My little debate with Sachs seems like gentle whispers by comparison. Want to understand what’s got some so upset and others true believers? A conference volume has just come out from Brookings. At first glance, this is your typical sleepy conference volume, currently ranked on Amazon at #201,635.

But attendees at that conference realized that it was a major showdown between the two sides, and now the volume lays out in plain view the case for the prosecution and the case for the defense of Randomized Evaluation.

OK, self-promotion confession: I am one of the editors of the volume, and was one of the organizers of the conference (both with Jessica Cohen). But the stars of the volume are the speakers and commentators: Nava Ashraf (Harvard Business School), Abhijit Banerjee (MIT), Nancy Birdsall (Center for Global Development), Anne Case (Princeton University), Alaka Holla (Innovations for Poverty Action), Ricardo Hausmann (Harvard University), Simon Johnson (MIT), Peter Klenow (Stanford University), Michael Kremer (Harvard), Ross Levine (Brown University), Sendhil Mullainathan (Harvard), Ben Olken (MIT), Lant Pritchett (Harvard), Martin Ravallion (World Bank), Dani Rodrik (Harvard), Paul Romer (Stanford University), and David Weil (Brown). Angus Deaton also gave a major luncheon talk at the conference, but his paper was already committed for publication elsewhere. A previous blog post discussed his paper.

Here’s an imagined dialogue between the two sides on Randomized Evaluation (RE) based on this book:

FOR: Amazing RE power lets us identify causal effect of project treatment on the treated.

AGAINST: Congrats on finding the effect on a few hundred people under particular circumstances, too bad it doesn’t apply anywhere else.

FOR: No problem, we can replicate RE to make sure effect applies elsewhere.

AGAINST: Like that’s going to happen. Since when is there any academic incentive to replicate already published results? And how do you ever know when you have enough replications of the right kind? You can’t EVER make a generic “X works” statement for any development intervention X. Why don’t you try some theory about why things work?

FOR: We are now moving in the direction of using RE to test theory about why people behave the way they do.

AGAINST: I think we might be converging on that one. But your advertising has not yet got the message, like the JPAL ad on “best buys on the Millennium Development Goals.”

FOR: Well, at least it’s better than your crappy macro regressions that never resolve what causes what, and where even the correlations are suspect because of data mining.

AGAINST: OK, you drew some blood with that one. But you are not so holy on data mining either: you can pick and choose, after the research is finished, whichever sub-samples give you results, and there is also publication bias, which reports positive results but not null results.

FOR: OK we admit we shouldn’t do that, and we should enter all REs into a registry including those with no results.

AGAINST: Good luck with that. By the way, even if you do show something “works,” is that enough to get it adopted by politicians and implemented by bureaucrats?

FOR: But voters will want to support politicians who do things that work based on rigorous evidence.

AGAINST: Now you seem naïve about voters as well as politicians. Please be clear: do RE-guided economists know something the local people do not know, or do they have different values on what is good for them? What about tacit knowledge that cannot be tested by RE? Why has RE hardly ever been used for policymaking in developed countries?

FOR: You can take as many potshots as you want; in the end we are producing solid evidence that convinces many people involved in aid.

AGAINST: Well, at least we agree on the much larger question of what is not respectable evidence, namely, most of what is currently relied on in development policy discussions. Compared to the evidence-free majority, what unites us is larger than what divides us.
