
Finding The Balance Between Standardization And Innovation

Tris Lumley will be presenting NPC’s research on the state of the UK nonprofit sector and the need for a paradigm shift incorporating shared measurement, data labs, a systems perspective on scaling, and a new mindset for leadership at a Markets for Good: Live event hosted by Liquidnet on 21st March in New York City. You’re invited to join the debate, share your perspective and explore how we can support each other’s efforts to transform the social sector. Sign up here. But first, Tris makes a case for shared measurement here.

I’m a huge fan of what we call ‘shared measurement’—outcomes frameworks incorporating a set of measurement tools that are used across a field, for example, youth employability. NPC has spent a great deal of time and effort over recent years developing these approaches, such as the JET (Journeys to Employment) Framework, and learning from other attempts across the world to develop them.

But the subject consistently raises controversy in the social sector and the evaluation industry. In fact I find there’s no better way to put the cat among the pigeons in a meeting of evaluators than to tell them you’re developing a common outcomes framework for a field. Not that I spend my time trying to stir up controversy…!

How could a single framework capture the diversity of the sector? Won’t standardising outcome measurement lead to a compliance mindset rather than nonprofits really ‘owning’ and embedding measurement? Isn’t there a danger that shared outcome frameworks will kill innovation and creativity as everyone moves to standardise the work they do as well as how they measure it?

Actually, I think there’s a huge challenge here for those of us who advocate that social purpose organisations should be clear about their goals and strategies, and should manage and measure their progress towards achieving them. And I believe it’s a challenge that can only be solved through common frameworks. But I fully acknowledge the danger of creating top-down frameworks that are imposed on a field, and the need to balance the top-down with the bottom-up approach.

Here’s why we have to move to shared frameworks.

I believe the core purpose of outcome measurement and evaluation is to help social purpose organisations manage, learn about and improve their results—that’s the best way to ensure they’re accountable to the people they aim to serve. I’ve written and spoken about this many times over the years. While it’s important that funders and investors get the information they need to make decisions based on results, until we live in a world where they consistently make evidence-based decisions, we can’t put all our eggs in that particular basket.

If the purpose of measurement is to learn and improve, why do we need shared frameworks? Isn’t it better for an organisation to develop a framework that’s perfectly tailored to its own activities? The answer, I believe, is categorically no. While a bespoke measurement framework can help you to establish whether you’re doing better than you were last year, it can’t tell you whether you’re doing better than the project in the next town. For that, you need a common framework.

Coordinated Action Against Domestic Abuse (CAADA) is a UK charity that has pioneered a common data platform called Insights. Because people working with those experiencing violence ask the same questions—on the same core data platform—different projects can benchmark their performance against each other, learn what’s being done differently, and implement changes. They might find, for example, that one project achieves twice as high a rate of disclosures of mental health problems because it has a qualified mental health practitioner on staff, and that if the second project’s staff are trained to broach mental health issues, they can significantly raise their own rate of disclosures.

[Figure: CAADA Insights sample report (illustrative; not based on actual data)]
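To make the benchmarking mechanic concrete, here is a minimal sketch in Python. The field and project names are hypothetical illustrations, not CAADA’s actual schema; the point is simply that once every project records answers to the same core questions, a straightforward aggregation yields rates that can be compared directly.

```python
# A minimal sketch of the benchmarking idea behind a shared data platform.
# All field and project names here are hypothetical, not CAADA's actual schema.
from collections import defaultdict

# Each record answers the same core questions, whichever project collected it.
records = [
    {"project": "Project A", "mental_health_disclosed": True},
    {"project": "Project A", "mental_health_disclosed": False},
    {"project": "Project B", "mental_health_disclosed": False},
    {"project": "Project B", "mental_health_disclosed": False},
    {"project": "Project B", "mental_health_disclosed": True},
]

def disclosure_rates(records):
    """Return the share of cases with a mental health disclosure, per project."""
    totals = defaultdict(int)
    disclosures = defaultdict(int)
    for r in records:
        totals[r["project"]] += 1
        if r["mental_health_disclosed"]:
            disclosures[r["project"]] += 1
    return {project: disclosures[project] / totals[project] for project in totals}

# Because every project answers the same questions, the rates are directly
# comparable, and an outlier prompts the question: what is that project
# doing differently?
print(disclosure_rates(records))
```

A real platform like Insights will of course handle far more than this, but the comparability all rests on that shared question set.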

Perhaps most powerfully, they find that when domestic violence projects are located in hospitals, they can get help to the people who need it at least a year earlier than in a non-clinical setting. Hospitals are where the consequences of violence often first present, if there’s someone there to look for them.

But what of the fear that standardising measurement leads to standardising activities? In all the time I’ve been working on this subject I’ve seen no evidence of it in practice. Quite the opposite—common frameworks enable learning and improvement precisely because they allow comparison with other approaches. The flip side of a lack of common frameworks is that services are often judged on the only comparable measure available—cost. Certainly in the UK, a great deal of service provision is dictated by a drive to minimise cost, not maximise outcomes.

If you acknowledge the need for shared frameworks, there’s still one critical caveat to take on board. We must avoid imposing frameworks top-down: they won’t fit real services and activities, they’ll be treated as a compliance reporting exercise, they’ll create an unnecessary burden on projects, and they won’t ultimately be embedded in practice or used to manage and improve outcomes. So shared frameworks have to be developed with the field, building consensus around what the core common outcomes should be and which measurement approaches are suitable and proportionate for the real world.

But with that important constraint on how common frameworks are developed, the result is a foundation for the field that can be built on to create powerful and potentially transformative change.

A project that’s just getting started on outcome measurement can pick up a shared framework and use it to guide the development of its own measurement approach. It doesn’t have to start from scratch (inefficient) or without the learning the field has already generated (ineffective).

[Figure: Blueprint for Shared Measurement, NPC for Inspiring Impact, 2013]
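As a rough illustration of that bootstrapping idea, here is a small Python sketch. The outcomes and tools named are hypothetical placeholders, not drawn from any published NPC framework; it simply shows a new project selecting the shared outcomes relevant to its work and inheriting the measurement tools the field has already agreed on, rather than inventing its own from scratch.

```python
# A sketch of bootstrapping a measurement plan from a shared framework.
# Outcome and tool names below are hypothetical placeholders.

SHARED_FRAMEWORK = {
    "improved_wellbeing": ["validated wellbeing scale"],
    "increased_safety": ["risk assessment checklist"],
    "employment_readiness": ["skills self-assessment survey"],
}

def bootstrap_measurement_plan(relevant_outcomes):
    """Select the shared outcomes (and their suggested tools) that match
    what this project actually does, instead of inventing new measures."""
    return {
        outcome: SHARED_FRAMEWORK[outcome]
        for outcome in relevant_outcomes
        if outcome in SHARED_FRAMEWORK
    }

# A new employability project picks only the outcomes it works towards,
# inheriting tools the field has already tested.
plan = bootstrap_measurement_plan(["employment_readiness", "improved_wellbeing"])
print(plan)
```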

Different projects and organisations can learn from each other. Communities of practice can develop best practices because they can see the results, and outlaw poor practices because they’re impossible to ignore. Ultimately, organisations can start to establish how their work fits together with those working on related outcomes. Value chains can form. Collaboration becomes purposeful. Collective impact can follow.

And of course funders and investors can finally start making rational resource allocation decisions—funnelling resources to effective interventions and value chains, and de-prioritising what doesn’t work.

Shared measurement frameworks are a part of the core infrastructure of a social sector that’s focused on maximising results for those we exist to serve. All the reasons for not developing and using them, I believe, are more about the needs of the organisations and individuals that constitute the social sector than they are about the needs of those on whose behalf we work.