Digital Impact was created by the Digital Civil Society Lab at Stanford PACS and was managed until 2024. It is no longer being updated.

The Death Of Evaluation

MFG Archive, Opinion

Andrew Means looks at whether program evaluation is dying, and what should replace it

Is program evaluation dying? This question has been swirling around my head the last few months. I don’t mean to imply that programs should stop evaluating their outcomes. I just find that the current framework of traditional, social-science-driven program evaluation frankly does not embrace the possibilities of today’s world. Put simply, program evaluation was not made for the age of big data.

To understand this, let’s think about a few of the key principles of program evaluation.

Traditional program evaluation is reflective, not predictive. It does what its name implies: it looks backwards and asks, “Did this work?” That is fine, except that while I am interested in knowing what worked in the past, I am more interested in learning how to make my program better in the future.

If I’m running a nonprofit, I want to be able to figure out ways to improve my program. I want this year to be better than last year, this cycle more effective than the last one. Traditional program evaluation wasn’t created to help me identify ways I can improve my program; it was created to prove whether my program worked or not.

Program evaluation actually undermines efforts to improve. Once I have my evaluation report, I have less incentive to innovate or improve because I now have a piece of paper saying that what I do works. If I were to change my intervention, that piece of paper would become less valid and thus less valuable.

Finally, program evaluation was built on the idea that data is scarce and expensive to collect. This meant that research was done on samples and extrapolated. That’s why most program evaluations don’t look at everyone you serve; they look at samples. Most organizations run evaluations only once a year and thus miss many of the people they serve. Built into the methodologies of program evaluation are assumed constraints that no longer exist.

What the sector needs is a feedback mechanism that fosters innovation and improvement. What we need are tools and methods that predict outcomes in real time. What we need are systems that help us automatically collect data and store it in ways that are scalable and useful. What we need are social sector analytics.

Analytics does not have the methodological burden of traditional program evaluation. Its history is in helping leaders make better informed decisions. It provides a framework for improvement. It has grown up in the world of big data and doesn’t carry the burden of old methodologies.

Imagine this scenario. A student enters high school with a 45% chance of graduating due to a variety of factors, including his academic performance, the fact that his single mother doesn’t have a job, and the food insecurity his family is currently experiencing. Given these risk factors, the student is sent to an afterschool program shown to help at-risk students catch up academically, his mother is connected to a provider offering job counseling, and the local food pantry is alerted that the family’s food allotment should be temporarily increased. Every week or month, the probability of this student graduating is updated based upon how his various risk factors change. If something isn’t working, the providers shift interventions or different organizations are brought in to meet the changing needs of the family. Each of the organizations involved with this family is able to see, in close to real time, how its interventions are changing probable outcomes for the family.
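The probability-update loop in this scenario can be sketched with a simple logistic risk score. Everything here is hypothetical: the factor names, weights, and baseline are illustrative stand-ins, not a validated graduation model, and a real system would fit them from historical student data.

```python
import math

# Hypothetical protective factors and weights. These values are invented
# for illustration; a real model would learn them from outcome data.
WEIGHTS = {
    "gpa": 1.2,              # higher GPA raises the graduation odds
    "parent_employed": 0.8,  # household employment raises the odds
    "food_secure": 0.6,      # food security raises the odds
}
BASELINE = -2.0  # log-odds of graduating before any protective factors

def graduation_probability(factors: dict) -> float:
    """Combine a student's current factors into a probability
    using a logistic function over a weighted score."""
    score = BASELINE + sum(WEIGHTS[k] * v for k, v in factors.items())
    return 1 / (1 + math.exp(-score))

# Week 1: low GPA, mother unemployed, food-insecure household.
week1 = {"gpa": 1.5, "parent_employed": 0.0, "food_secure": 0.0}
# Week 8: tutoring, job counseling, and the food pantry have helped.
week8 = {"gpa": 2.5, "parent_employed": 1.0, "food_secure": 1.0}

p1 = graduation_probability(week1)  # roughly the 45% from the scenario
p8 = graduation_probability(week8)  # noticeably higher after interventions
```

As the providers log new data each week, re-running the score gives every organization the same close-to-real-time view of how the student's probable outcome is shifting.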

This is a system that is outcomes-focused, where interventions change in real time based on the changing needs of the individuals being served, and where data and technology help foster innovation throughout the whole process. It uses analytics to achieve a desired outcome while helping organizations understand and improve their impact in real time.

My hope is that we can move from the model of traditional program evaluation to the more nimble and innovative model of social sector analytics.

Many thanks for such insights on traditional program evaluation, Andrew. Be sure to follow him on Twitter, and do ask him any questions, as this is a topic he is very passionate about.