Give Better

We are an analytical people.  A whole industry of charity evaluators has taken root to help individuals and foundations suss out effective projects.  Non-profit development officers spend hours keeping up with Charity Navigator, Guidestar, and the Better Business Bureau.  Now Givewell enters the scene to offer rigorous research and analysis so donors don’t have to.

The Three Cups controversy coincided with widespread discussion of Dean Karlan and Jacob Appel’s well-reviewed book, More Than Good Intentions: How a New Economics Is Helping to Solve Global Poverty.  Their research supports two main conclusions: (1) understand poverty, and (2) be rigorous in your analysis.

I agree with Karlan and Appel’s conclusions and would like to offer these points to those advocating for greater evaluation:

1.  We evaluate programs through our worldview, which may or may not lead to long-term social change.

A few years ago, I adapted the Inter-American Foundation’s Grassroots Development Framework (GDF) for a project in Brazil.  The GDF is very thorough, providing a full range of metrics with which to create benchmarks to show year-to-year improvement.  I asked local staff for feedback (none was given) and later asked them to fill out a spreadsheet about their “basic needs.”

The results were entirely unsatisfying in terms of demonstrating need: the data they provided showed nearly everyone as living in houses with electricity and sewage.  Well after the evaluation cycle required by funders, I learned that most families crowded into abandoned houses, using tarps to create a space against a few walls and pirating electricity off of municipal wires.  Many of the families were mobile, moving in with family members or moving across town for work.  The evaluation failed to deliver anything that accurately depicted the dire circumstances of people’s living arrangements or provided a benchmark for future evaluation.

Local staff had no investment in the process of evaluation—yet another form from afar—and had no idea how to represent the dynamic nature of people’s lives in raw data.  For example, the question “number of people served” is woefully insufficient.  Directly served?  In a school with 100 students, how do you categorize the people who get jobs from the project, the parents who stop beating their other children because of what they learn through parenting classes, the girls who drop out but still postpone pregnancy for another three years, and the maid who became so inspired that she took university prep classes?  Within social change projects, the change affects everyone who comes in contact with the work.  It is hard to count them all.

Evaluation often encodes a worldview that rewards large NGOs and those aligned with Western-style middle-class culture, not the indigenous activists who are often the best leaders of local social change projects.  The “more is better” worldview shows up in metrics like “project cost per beneficiary” and “return on investment,” which encourage high beneficiary counts.  NGOs can skew their data towards large programs that “touch” a lot of people over focused programs that fundamentally change the course of a few people’s lives.

2.  From rigor to rigor mortis

Within our current paradigm of donor-driven priority-setting, more evaluation of NGO programs resembles a school that implements more testing in order to deal with children who are not learning. (Sadly, comparing NGO leaders to children accurately conveys the condescension many of them feel in dealing with donor requirements.)  Too much or inappropriate evaluation results in a decrease in innovative and potentially risky social change work.  As Steven Lawry writes in the Nonprofit Quarterly, “Too much rigor can lead to rigor mortis.”

Rigor tipping into rigor mortis is exactly what happens in Givewell’s attempt to advise donors on good international education projects through exhaustive research and analysis.  In reviewing charities, Givewell begins “by reviewing all publicly available information about your organization including your website, external evaluations, and other relevant information.”  The requirement for external evaluations knocks out at least 90% of small to mid-size NGOs.  Indeed, the only education project anywhere in the world that Givewell endorses is Pratham in India.  Its external evaluation was conducted by MIT’s Poverty Action Lab, for which a Givewell board member worked in 2007.  Their website claims that “thousands of hours have gone into finding our top-rated charities”—and yet they could find only one good education program in all the world?  With hurdles this high, the funding of good NGOs will be dead on arrival.

3.  Evaluation is rarely funded.

Raising money for small NGOs is much like patching together a quilt.  Most foundation funding comes in the form of project budgets, with NGOs using individual donations, event proceeds, and any fees for service they can muster to cover overhead.  In fifteen years of writing grants for international programs, I have received only one that fully funded evaluation.  The typical foundation proposal for small NGOs is for $10,000-$50,000, leaving little room for evaluation if the requested project is to be funded out of this same pool of money.  In fact, charity evaluators focus on administrative spending, creating incentives for NGOs to skip rigorous evaluation lest they be accused of bloated overhead.

Let’s fix the system

The current system is not working, but we can change it to help NGOs run better and donors give better.

First, effective systems for measuring social programs will require local NGO leaders to be at the table during their development.  The current system has donors dictating what needs to be fixed and then dictating the metrics (sometimes implicitly through the application process) to measure how well local people have achieved what we want them to achieve.  Engaging local NGO leaders will help us understand poverty and the biases that filter our understanding of its decline.  Academia has peer review for publications; philanthropy needs its own form of peer review when it comes to asking the right questions about projects engaged in poverty alleviation.

Second, let’s invest in the capacity building that local NGOs need and want with regard to evaluation.  Foundations should fund a cross-sector team of philanthropists and NGO leaders to develop common evaluation tools that support local capacity building.  With every proposal, they should add 10% to fund staff time for evaluation that informs both that organization’s learning and our own about the issues at hand.  As I wrote in Being Wrong, we need to create an environment in which it is okay to make mistakes within a setting of learning.

A few years ago, my colleague in Brazil won a prestigious national award.  During the nomination process, I was told that all nominees would be visited by a private detective to check out the validity of their work.  This seemed unnecessarily sinister—certainly out of the ordinary.

At the award banquet, my colleague said that a detective never did come by.  Yes, he did, insisted the award staff.  My colleague then recalled a builder who stopped by one day to find out what type of program was running up the street from a property he claimed to be about to develop.  He was immediately surrounded by children who chattered with him for thirty minutes about the project and their experiences with it.  My colleague never had a chance to talk with him—she watched from afar while caught up in another conversation.

As I think back, this was the best form of evaluation we could have asked for.  Beneficiaries were given the space to speak about their experiences in a form and venue comfortable to them.  As we invest more thought and money in the best ways to evaluate local programs, sometimes it is most helpful just to stop by and witness the work in person.

Public/Private Ventures offers an excellent white paper on ways to improve evaluation of small social programs.  In it, they describe The Benchmarking Project, which engaged 200 workforce development organizations in collaboratively developing new evaluation systems that responded to their organizations’ realities.
