
Forcing Nonprofits To Lie About Data


Laura Quinn raises a common problem faced by nonprofits: paying the cost of data. But what is the result of these ever-growing costs in the search for better data?

Here’s a truth that’s rarely spoken: if a key funder asks a nonprofit for data it doesn’t really have, it’s standard practice for nonprofits to simply invent plausible data to fill the gaps.

On its face, this seems outrageous. As a sector, how can we ever hope to understand impact and compare interventions effectively if half the results data we think we have is simply fabricated? Who are these liars bringing us down, and where’s the nearest jail cell we can slap them into?

It’s not so clear-cut, however, and the road we’ve travelled to get here is considerably more subtle. Data invention is the inevitable result of tying money to data without ensuring that a nonprofit’s infrastructure is sufficient – and sufficiently funded – to collect that data. It’s expensive and time-consuming to collect lots of data about lots of things – often as expensive as providing a simple service itself.

For example, let’s say I’m running a nonprofit that works to eliminate school bullying. We’ve got a public hearing on a school board issue coming up, and we’re down to the wire canvassing for it. We’ve been lucky enough to get funding to manage a big volunteer canvassing campaign at the same level as our last big volunteer campaign, but this time, the funder has put an additional focus on metrics. In response, we’ve added a data-collection element – in addition to talking to each person about the importance of the issue, volunteers are asked to record each person’s ZIP code, gender, and apparent age and race.

We all know data is only as good as the collection methods, and in the heat of the campaign, our volunteer-reported data gathering got a little iffy. Some volunteers decided the issue was more important than the data and simply tallied how many people they talked to. Others did not turn in any data sheets at all, and it’s unclear whether they were tracking anything.

Now we’re facing a funder report asking us for campaign results. On the surface, the campaign was successful – we won the school board vote – but we’ve got to come up with some way to complete the fields asking for detailed metrics.

When we sought funding, we said we would talk to 2,500 people. We’re unsure precisely how many people we actually talked to, but we’re confident it was at least that many. Let’s say we put 2,551, so the number doesn’t look suspiciously round.

For the percentage of people aged 55 or older, our data – representative of maybe half the people we actually talked to – says it’s about 20 percent, but we know some volunteers who canvassed several retirement homes did not turn in demographic data. Let’s say 27 percent, then.

These estimates are probably close to accurate, and our best guesses are well-meaning, but we’re presenting them as data. From our perspective as a nonprofit, they’re close enough, and better than telling the foundation we didn’t successfully track the data we committed to tracking. But if the foundation then takes that information and compares it to other nonprofits’ data – which might be equally iffy – then any conclusions drawn about which demographics to speak to in order to make change happen, or how many people to reach, are invalid.


This is not an obscure example. If we could collect accurate numbers on how many nonprofits have ever fabricated results data (an interesting challenge in and of itself), I would put money that it’s way north of 50 percent of all nonprofits. Fabricating at least some of the data they report to funders is the day-to-day reality for many nonprofits.

Whose fault is this? It’s tempting to place the blame squarely on nonprofits, but I would focus more on problems in the design and funding of programs. In our example, the foundation has funded a campaign at the same level as a previous one but added new data requirements without funding them. Right or wrong, the assumption is that the nonprofit will cover the added cost, though it’s just as likely that neither the foundation nor the nonprofit itself recognized the additional cost involved.

The truth at the core of this issue is this: It’s almost always easier and less expensive to carry out your mission without collecting data.

It might seem like data collection is hardly any extra work – why can’t volunteers just write the information down? – but in reality, it often requires additional training, hiring more-reliable staff members or volunteers, and spot-checking and overseeing collection, all of which cost more. Fundamentally, running an outreach campaign that reliably collects data is simply more expensive than running one that touches just as many people but doesn’t collect data.

So. How do we ensure nonprofits are able to collect data without feeling like they need to fabricate it?

First we need to acknowledge as a sector that collecting data is more expensive than not collecting data. We need to stop pretending that nonprofits can easily collect data and present metrics without additional money or any compromise in quality or quantity served. Then we need to work through the tradeoffs implicit in that – do we prefer to serve more people with less data collected? More data with fewer people served? More data but lower-quality service (a choice unlikely to be popular)?


It doesn’t serve anyone to simply deny that tradeoffs exist. We can ask for more data and expect the nonprofit to serve the same number of people at the same level of quality for the same amount of money, but asking won’t make it possible.

Most frequently, nonprofits that care deeply about people and services will sacrifice the data rather than the service, and simply fabricate it for funders. If we demand more data without more funding, it logically follows that what we’ll get is simply data that’s more made up.


Many thanks to Laura Quinn, Executive Director of Idealware, for this latest op-ed piece. Please let us know what you think in the comments below or on Twitter.
