

Finding More Value In Charities’ Research


Caroline Fiennes of Giving Evidence walks us through her views on lost research and data, and the information that could improve your social programs

 

Quite possibly, some NGO has discovered a great way to, say, prevent re-offending or improve literacy, but nobody else knows about it, so their genius innovation doesn’t spread. Surely it’s unacceptable that people may miss out simply because research can’t be found and/or isn’t clear.

The problem seems to be that, although NGOs conduct masses of research (their ‘monitoring and evaluation’, which is research even if not always framed as such, and research to inform policy work), much of it isn’t findable and/or isn’t clear. A lot of NGO-generated research never gets published: it’s hard to know precisely how much, but one study we came across recently (and are trying to get published!) implied that a measly 2% gets published. Other research is published, but only in places nobody would know to look, such as on a small organisation’s own website. And some research that is published isn’t at all clear about what the intervention actually was, what research was done, or what the results were. For example, this systematic review of programs to teach cookery skills found that, of programs run by charities, ‘very few reports described the actual content and format of classes’.

Information that exists but can’t be found is a failure of information infrastructure: the system for encouraging and enabling material to be published, clear and accessible.

Giving Evidence is therefore delighted to be running a project exploring how to get NGOs to publish more of their research in a way that’s findable, and with enough detail to make it useful. The project is essentially a big consultation on two core concepts, and we invite your input. First, on findability: there should be some standard way of ‘tagging’ research online so that it’s easy to find. And second, on clarity: when NGOs publish research, they should include detail on several key items (a little like the standard abstracts in medical trial reports); a rough sketch of what such a record might look like follows the list. Those items might be:

i)     Description of the intervention, in enough detail that somebody could replicate it if it fits their work;

ii)    Description of the people they worked with. For instance, if the program aims to get people back to work or to avoid re-offending, the success rate is meaningless unless the cohort is adequately described.

iii)   The research question, i.e., the question(s) which the researchers set out to answer;

iv)   The research method and how it was used. For example, if 20 beneficiaries were interviewed, it’s essential to know how those 20 were chosen: if beneficiaries self-select, or if NGO staff choose them, the results may be biased because only the most cheery beneficiaries get included; choosing them at random avoids this.

Together, (iii) and (iv) show the quality of the research and therefore how reliable it is.

v)    The results.

One campaigner has suggested also including (vi) how the researchers guarded against bias in the results, and (vii) how we can know that the results aren’t just a function of chance.
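To make the two concepts concrete, here is a minimal, purely illustrative sketch in Python of how a ‘tagged’ research report covering items (i)–(vii) might be represented as structured data. Every field name and tag below is our own invention for illustration; the project has not specified any format, and this is not a proposal from Giving Evidence.

```python
from dataclasses import dataclass

# Illustrative sketch only: the field names and tags below are assumptions,
# not a format specified by Giving Evidence or this project.
@dataclass
class ResearchRecord:
    # Findability: standard tags so the report can be discovered online
    tags: list[str]                # e.g. sector, intervention type, country
    # Clarity: the abstract items proposed above
    intervention: str              # (i) enough detail to replicate
    cohort: str                    # (ii) who the program worked with
    research_question: str         # (iii) what the researchers set out to answer
    method: str                    # (iv) method, and how participants were chosen
    results: str                   # (v) the findings
    bias_safeguards: str = ""      # (vi) optional: how bias was guarded against
    chance_assessment: str = ""    # (vii) optional: why results aren't just chance

# A hypothetical example record for a criminal-justice program
record = ResearchRecord(
    tags=["criminal-justice", "mentoring", "UK"],
    intervention="Weekly one-to-one mentoring for six months after release",
    cohort="Adult men released from custody, self-referred to the program",
    research_question="Does mentoring reduce one-year re-offending rates?",
    method="Interviews with 20 participants, chosen at random",
    results="Self-reported re-offending lower than the regional average",
)
print(record.tags, record.research_question)
```

The point of a standard record like this is simply that the same fields appear in every report, so readers know where to look and search tools know what to index.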

We’re taking the UK criminal justice sector as a case study, though the problems arise in many parts of the charitable world, and so may the solutions. More detail on the project is here.

This project is a side-effect of Giving Evidence’s work on learning lessons from how medicine uses evidence, published here. In medicine, though research is in many ways highly sophisticated, the leading health systems researchers Iain Chalmers and Paul Glasziou found that “adequate information on interventions is available in [only] 60% of reports on clinical trials”. Other weaknesses in research reporting render at least half of it wasted. Monitoring and evaluation in the UK alone costs charities about £1bn every year: standardising tagging and research abstracts would be a cheap way to wring more value from that.

Do please get in touch if you have an opinion or relevant expertise. Perhaps you are: an NGO producing research (would you want to report this way? Would you use other NGOs’ research more if it were like this?); a funder, academic or government agency who might use NGOs’ research (would this help you?); an information architecture expert (how could you improve on this idea?); or a researcher (have you seen this done elsewhere, and what can we learn from that?).

We’re particularly interested in:

a)   What needs to be in place for such a tagging system to work, and

b)   Your views on the items proposed above for the abstract.

Also get in touch if you are a funder in a different sector who is interested in improving the findability and clarity of research in your sector.

As we know, the charity sector is full of innovation and ours is one of many projects to ensure that we get full value from that: by enabling good innovations to succeed and poor ones to sink. This project is essentially a consultation and we welcome your views.

 


Many thanks to Caroline for sharing her insights and consultation project to help develop our ability to learn from our peers across the sector. With so many of us distinctly aware of the frustrations that come with researching the impact of programs, we hope you will take Caroline up on her invitation to give feedback. If you have any comments or questions, please do get in touch over Twitter or below.