
Beware: Indicator Blindness

[image: bimetallic indicator coil]

Julia Coffman: “Ask five people to free associate on ‘measuring social returns’ and at least one is likely to mention indicators. Indicators are measures that signal whether certain expected conditions or results have been achieved. Measuring a set of indicators connected to a nonprofit’s strategy is probably the most common approach for capturing social returns. For example, after developing a theory of change, organizations identify indicators to capture the theory’s outputs, outcomes, and impacts. Those indicators are then tracked at regular intervals and trends are reported, often using dashboard-style reports.”

Tracking indicators to assess progress seems to make good sense. Indicators address questions such as “how much” of something was produced, or “to what extent” a certain result was observed. They help you understand where you are, which direction you’re going, and how far you are from where you want to be. But relying on indicators alone as your measurement strategy has substantial limitations. Indicators tell us very little about what to do if we see something we don’t like or didn’t expect. For example, knowing that graduation rates are falling signals that there is a problem, but it tells us nothing about how to fix it.

Patti Patrizi and Liz Thompson recently coined the phrase “indicator blindness” to warn against our tendency to become so focused on indicators that we fail to see other signals of success or failure that can only be uncovered through more thorough evaluation.

They say indicators work best when a great deal of certainty or evidence exists around a theory of change—when cause-and-effect relationships between activities and outcomes have already been established. Immunizations are their example: evidence has shown that immunizations are effective, so tracking how many are delivered is a meaningful indicator of a health system’s performance. The indicator is highly predictive of the outcome or return.

The problem, however, is that many nonprofits are implementing strategies based on theories of change that are untested and involve a great many uncertainties. To address big, hairy social problems, nonprofits are constantly trying new things, betting that certain activities will lead to certain outcomes and impacts without knowing for sure that they will, or that everyone involved will behave as predicted. When this is the case, measuring indicators that signal whether a strategy is playing out as expected actually tells a nonprofit very little about how that strategy is working or whether it was right in the first place. Indicator tracking simply assumes the strategy is the right one, without addressing deeper questions about how activities were implemented and whether and how they relate to eventual social returns.

The point here is not to discourage the use of indicators. Rather, it is to say that tracking indicators alone is not a complete evaluation approach for capturing social returns, especially for strategies that are new, complex, or untested. Indicators might provide data on outcomes and impacts and on what you did to achieve them, but they won’t say much about the relationship between them. As Patrizi and Thompson suggest, we need to go well beyond asking “what” and “how much.” We need to ask and answer “why,” “how,” and “for whom.”