Members of the Markets for Good community are strong advocates for increasing the availability and use of data in the social sector. But when it comes to impact data, recent progress on availability is not yet matched by meaningful use. Perhaps this is because we’re not yet clear enough on how we’ll use data when we have it. By diving into the detail of who’s asking questions about impact, and what they need to know, we may be able to add new momentum to the search for better data, and ultimately greater impact.
The social sector has unquestionably made great strides on impact measurement in the last decade. The majority of charities and social enterprises today say that they are measuring impact, that measurement is a necessary component of delivering effective programmes, and that they are doing more than they were five years ago (as NPC’s 2012 research showed). NPC itself has played a role in that shift, along with the impact movement within the UK and internationally.
These are exciting times. The Social Impact Investment Taskforce launched its reports a few weeks ago, identifying impact measurement as the critical cornerstone of the development of the impact investment market. I was delighted to play a role in co-chairing its working group on impact measurement, to produce guidelines that will help investors develop measurement frameworks that ensure the growing market lives up to its potential to create impact.
Working on the Taskforce impact measurement working group gave me a great opportunity to reflect on where we have got to and the challenges that remain. One of the insights that emerged for me was the importance of absolute clarity about the purpose of impact measurement. Our research found that three purposes shone through—to grow the market by demonstrating the impact that can be achieved, to be accountable to those we aim to serve, and to provide value in itself to those involved in measurement (for example, learning and improving).
Reflecting on these led me to question how well we are living up to them in the current application of impact measurement in the social sector. My answer is that we still have a long way to go, and that getting there will require embarking on a new journey.
NPC’s 2012 research found that the main driver for impact measurement is meeting funders’ requirements. So perhaps we are meeting the first purpose well? But only 10% of organisations say impact measurement has helped them attract new funding. And research with funders found that only a minority say they are using impact evidence to inform funding decisions. The steer from the field is that it’s actually much worse than this—organisations with good evidence of impact, or those that use evidence to learn and improve, generally say they don’t feel they are more successful in their fundraising as a result.
What about accountability? There are many ways in which organisations can hold themselves accountable—both proactively by engaging stakeholders, and more passively by showing how activities are developed, designed and evaluated so that they match real needs and deliver appropriate outcomes. While we lack good evidence in this area, there are few obvious examples of charities and social enterprises using their impact reporting proactively to hold themselves accountable to those they aim to serve. And it’s rarer still for funders, for example, to actively compare what beneficiaries say their needs and wants are with what those they fund provide.
And is impact measurement improving services? Our 2012 research found that this was the main benefit, but one that only a quarter of organisations were actually achieving. Here, I am aware of a minority of organisations that have embedded impact measurement in practice and are able to use it to learn about, manage, and improve programmes. Organisations like Citizens Advice, Street League and CAADA (Coordinated Action Against Domestic Abuse) can show how they use data to drive decision-making, resource allocation and service improvement. But these examples are still too uncommon, and there is a general sense that impact practice is not yet driving continuous improvement on the whole.
Perhaps, then, if impact measurement is mainly driven by funders’ requirements, the current state of the field is that it is often no more than an exercise in box-ticking.
In my next blog, I will offer some thoughts on what a new phase of impact measurement could look like.