
Revisiting The Frame For Evaluation

We can nearly wear the tread off of a word during cycles of popularity or necessary use. “Impact” fits this category. Regardless of possible misuse or overuse, however, it’s worth the attempt to maintain a real definition and to use the word meaningfully; abandoning it altogether would amount to little more than a word game. For the social sector, “impact” is a uniquely complex proposition that involves various types of data and measures to demonstrate success. Evaluation is a route to defining impact clearly. Brenna Marea Powell, Associate Director, Stanford Center on International Conflict and Negotiation, resets the frame for Evaluation as we open this theme on Markets For Good.

I’ve noticed that the word “impact” can elicit groans. Even worse is “impact evaluation,” which conjures up jargon-riddled efforts to quantify the unquantifiable and dull the feel-good factor. Talk about impact is everywhere, but there’s a good deal of muddiness about what we’re doing when we try to measure impact, and a lack of enthusiasm about why we’d want to do so.

Meaningful impact evaluation is a learning tool, not a report card or a validation exercise.  It’s useful because it allows us to learn about what we’re doing, assess whether we’re achieving our outcome goals, and recalibrate our approach based on what we find.  It helps us be smart about what we do.  The alternative is fumbling around with the lights off.

So if you find yourself confused or on the verge of a groan, here is a very quick framework that lays out the core elements of a meaningful impact evaluation in layman’s terms.

The purpose of impact evaluation is to understand whether a given initiative (or intervention, in more formal language) is having the broader effects it is designed to have.  A really good impact evaluation rests on four questions:

1) What are the intended effects of this intervention?

2) What is the evidence that this intervention is in fact having these effects?

3) What can we learn about why (or why not) this intervention is having these effects?

4) Is there any evidence that the intervention is having unintended effects we should care about?

No fanfare, no “Mission Accomplished” banners.  A strong evaluation will address these questions clearly and in simple terms.  Here are some keys to doing so.

Specifying effects.  The first question means spelling out what the intended effects are, on which population(s), and over what timeframe.  There may be multiple effects: many initiatives are designed to have layers of effects on individuals, households, and communities.  It matters less whether the intended effects appear optimistic, cautious, or naïve.  What is important is that they are explicitly articulated, because without a clear answer to this question there’s really nothing to evaluate.

Observing effects.  We have to decide how we would recognize the effects if we saw them.  What are the observable features of the effects we’re looking for?  These might be indicators related to health outcomes or economic well-being, or behaviors and attitudes that can be observed and measured with a little creativity.  Measuring them is less likely to be impossible than you think.

The right counterfactual.  Once we understand what effects we’re looking for, we want to know whether any effects we observe can be attributed to our intervention (as opposed to some other change going on in society).  This means finding the right counterfactual or comparison group.  We need to compare the treatment group (the population where the intervention has been implemented) to another group that best approximates our treatment group (only without the treatment).  There are different ways to do this depending on the context—randomization, careful matching informed by good contextual knowledge, or other techniques.
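
To make the comparison concrete, here is a minimal, purely illustrative sketch in Python (the data and column names are invented for the example) of the simplest version of this idea: comparing average outcomes between a treatment group and a comparison group.

    # Hypothetical data: one outcome measure per household and a flag for
    # whether the household received the intervention.
    import pandas as pd

    df = pd.DataFrame({
        "received_intervention": [1, 1, 1, 1, 0, 0, 0, 0],
        "household_income":      [520, 480, 610, 550, 430, 460, 400, 445],
    })

    treated = df[df["received_intervention"] == 1]["household_income"]
    comparison = df[df["received_intervention"] == 0]["household_income"]

    # A difference in average outcomes only approximates impact if the
    # comparison group is a credible counterfactual (e.g., assigned by
    # randomization or built through careful matching).
    effect_estimate = treated.mean() - comparison.mean()
    print(f"Estimated effect on household income: {effect_estimate:.1f}")

The point of the sketch is not the arithmetic, which is trivial, but the design choice behind it: the estimate is only as good as the comparison group.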

Identifying mechanisms.  Sophisticated tools like a randomized controlled trial can help you establish causal attribution, but they don’t necessarily uncover mechanisms.  In other words, they can help you understand whether something is working, but they don’t always tell you why it is (or is not) working.  Varying the intervention across treatment groups to uncover aspects that may be more or less successful is a good idea.  Doing some good interview or survey work with participants in the study should be mandatory.  Quantitative rigor is important, but really understanding how something works requires talking to people.
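
As a small, hypothetical illustration of varying the intervention (the arm names and numbers are invented), comparing average outcomes across a control group and two variants of the same program can hint at which component matters; only the interview and survey work explains why.

    # Hypothetical multi-arm comparison: a control group and two variants
    # of the same program. Differences across arms suggest which component
    # drives the effect; interviews with participants explain the "why".
    import pandas as pd

    df = pd.DataFrame({
        "arm": ["control"] * 3 + ["basic_program"] * 3 + ["program_plus_training"] * 3,
        "outcome": [40, 42, 38, 55, 52, 58, 70, 68, 73],
    })

    print(df.groupby("arm")["outcome"].mean())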

Uncovering unintended consequences.  Consciously looking for any unintended consequences is both smart and something we owe the communities in which we work.  Finding unintended consequences requires asking about them.

There are different ways of doing good impact evaluation, and varying timeframes as well—ranging from rapid-feedback prototyping to long-term studies.  Most critically, understanding impact requires a real desire to learn and grow.  Answering the four questions I’ve laid out can be a guide to doing so.