In Search of Better Data about Nonprofits’ Programs

What are we really asking for when we require nonprofits to produce data on performance, effectiveness and impact? While the surface logic is clear – we need to know this information – the full context and set of assumptions surrounding the request bear closer examination. Laura Quinn, founder of Idealware, breaks it down to reveal the barriers to generating good data and, further, calls out a few ways that the sector can and should support any request for more and better data.

Few would dispute that there is a lack of transparency in the nonprofit sector, but the blame for that lack of transparency falls all too often—and far too easily—on nonprofits themselves. The refrain goes, “If only they would collect better data and better show their impact, it would be clear to funders and donors where to best spend their money.” While this type of thinking is hard to refute in theory, in practice it’s almost impossible to live up to.

To illustrate, let’s play through a hypothetical scenario: Let’s say you’re the data and program evaluation manager for a mid-sized human service nonprofit that provides counseling to victims of domestic violence in the large city of Springfield, with about 35 social workers in the field. It’s your job to help oversee the data systems, analyze data to assess how programs are going and how they could be improved, report to funders and foundations on what they want to know, and think strategically about how you’re measuring and evaluating in general.

With all the recent interest in data and measurement, you have substantial buy-in from your executive team to try to think strategically about how you can best use data—after all, they hired you, and the very existence of your position speaks to their commitment. You also have the luxury of solid data systems that allow staff to enter data from any browser and see case data for their own clients, and that let you pull high-level numbers and reports on a number of important metrics.

Sounds like you’re in good shape, right? Compared to a lot of nonprofits, you are—but you still have a lot to juggle. What are your biggest headaches likely to be?

  • Data Quality. Your social workers are generally on board with the idea of systematically entering data, but they’re already overworked and underpaid—should they stop to enter data if it means putting a woman’s life at risk? Entering data sometimes falls off the bottom of their critical priorities list, leaving the data out of date. You’re thinking through options: Would giving them mobile devices to enter data from the field help—and can you find funding for that? How about simply being stricter about data entry being part of their job—would that help, or would it damage morale for critical client-facing staff? What about trying to find the budget to hire someone just to help with the data entry? There are no easy solutions.
  • Providing Data to Funders. Let’s say your organization receives funding from two different state programs and three foundations, which is not at all unusual. There’s no standard set of metrics, so each foundation asks for its own, often requesting similar metrics with meaningful differences in definitions—so, for instance, one asks for detailed data on children vs. adults served and defines children as under 16, while another asks for similar data but defines children as dependents living in a parent’s household (a short code sketch after this list illustrates the mismatch). What’s more, two funders ask for client-level data so they can do their own analysis. For one, you can download the appropriate data from your system and upload it to their system, but the other won’t accept an upload, so you need to copy and paste all the data about the constituents you’ve served under their grant out of your database, field by field. (This may sound agonizing, but it’s not rare. A number of funders—especially government entities—require detailed data but don’t accept any form of upload or automatic data transfer, apparently expecting that nonprofits will not have any data systems of their own.)
  • Meeting Changing Data Requirements. It’s complicated enough providing all the metrics funders want, but every year about a third of your funders change their data requirements. What’s more, you’re not likely to be reporting to all your funders at the same time each year, so several times per year you’ll need to update your reports, your processes, and maybe even your systems to account for new requirements.
  • Defining How Best to Measure for Improvement. A huge part of your job is making sure you have the right data to report to funders, but is that data actually useful to your organization? Does it help you understand what’s working and improve what isn’t? At best, funders are likely asking for a lot of disparate data, requiring some strategy to figure out how best to use it to improve your own programs. More likely, some of what would be truly useful to internal improvement requires additional reporting and analysis, so you need to make time to work with executive management to define precisely what should be measured and how, and to make that happen.
  • Trying to Measure Impact. These days, everyone wants information on actual impact. Many people will tell you it’s not enough to know how many people you’re serving or what happened to them after you served them—it’s also critical to understand the long-term impact of your services on the community. There’s just one problem: This type of measurement generally requires extensive, university-level research—often with control groups, enormous budgets and large spans of time. If someone had already done research relevant to your services, you could use that to define your impact based on more easily gathered data, but unfortunately, nothing exists. (In fact, it’s rare to be working in a program area where this kind of research does exist.) Funders don’t seem interested in funding this type of research for the good of all the organizations doing this type of work, but seem to expect your organization to produce it on its own with your very limited data and evaluation budget.
  • Fending off Bad Research. With so many demands for data that isn’t really “knowable,” it’s tempting to take on research projects that might appear to address them but don’t provide any real value to your organization. That means you spend a lot of time trying to dissuade the powers that be from taking on foolish research projects that can’t possibly provide useful data on your limited budget.
  • Proving Your Value. Even as you think through all this, you’re often called upon to prove that the money the funders are spending on you makes sense—after all, your salary isn’t directly going to help the enormous number of women who need help, and who’s to say all your work isn’t just a waste of money? You’re asked on a monthly basis to show how you’re saving the organization money or helping with fundraising, and there’s always the lurking danger that the executive team will no longer prioritize data and evaluation and you’ll be out of work.
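
To make the definitional mismatch in the “Providing Data to Funders” item concrete, here is a minimal Python sketch. The client records, field names, and funder definitions are all invented for illustration; nothing here reflects a real funder’s requirements or a real case-management system. The same client list produces different “children served” counts depending on whose definition applies:

```python
from datetime import date

# Made-up client records with illustrative field names (hypothetical schema).
clients = [
    {"client_id": 1, "birth_date": date(2010, 5, 1), "lives_with_parent": True},
    {"client_id": 2, "birth_date": date(1997, 3, 12), "lives_with_parent": True},
    {"client_id": 3, "birth_date": date(1990, 7, 30), "lives_with_parent": False},
]

REPORT_DATE = date(2014, 6, 30)

def age_on(birth_date: date, as_of: date) -> int:
    """Whole years elapsed between birth_date and as_of."""
    had_birthday = (as_of.month, as_of.day) >= (birth_date.month, birth_date.day)
    return as_of.year - birth_date.year - (0 if had_birthday else 1)

# Funder A defines a "child" as anyone under 16 on the reporting date.
children_funder_a = [c for c in clients
                     if age_on(c["birth_date"], REPORT_DATE) < 16]

# Funder B defines a "child" as a dependent living in a parent's household.
children_funder_b = [c for c in clients if c["lives_with_parent"]]

# Same clients, same program -- two different "children served" counts.
print(len(children_funder_a), len(children_funder_b))  # -> 1 2
```

Multiply that divergence by every metric and every funder, and the reporting burden described above comes into focus.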

Not an easy job, right? Some might say it’s nearly impossible.

But for many, if not most, small to midsized nonprofits, the reality is even worse. Remember, this example assumed that you had the money and buy-in to get up and running with solid data systems, which is probably not an accurate assumption for the vast majority of nonprofits. It also assumed that there was actually a person in the organization able to put any strategic thought into using data effectively on top of everything needed just to keep the doors open—again, not a likely assumption.

The point of this hypothetical exercise is, primarily, to show that we can’t assume nonprofits have the resources to provide high quality data about their own effectiveness. While that might seem like an easy and obvious thing for them to be able to do, it’s not—not in the least.

How do we get them to a point where that’s possible? It would take more than just a little training or a second look at their priorities. They’d need sizable investments in a number of areas. They’d need help with technology, and help understanding how to best make use of data and metrics on a limited budget. They’d need a rationalized set of metrics and indicators to report on, standardized as much as possible by sector, with a standard way to provide them to those who need them.
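
As a purely hypothetical illustration of what one entry in such a rationalized, sector-wide metric catalog might look like (no such standard exists today, and every name below is invented), a shared definition would need to pin down at least a stable identifier, a plain-language counting rule, and a unit, so that every funder consumes the same number:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    """One entry in a hypothetical sector-wide metric catalog."""
    metric_id: str    # stable identifier every funder references
    definition: str   # plain-language rule for what gets counted
    unit: str         # what a count of 1 represents

# Invented example: if all five funders in the scenario above cited this one
# definition, the nonprofit could compute the number once and report it to all.
CHILDREN_SERVED = MetricDefinition(
    metric_id="human_services.children_served.v1",
    definition="Unduplicated clients under age 18 as of the reporting date",
    unit="persons",
)
```

The point of the sketch is simply that the definitions, not just the numbers, are what need standardizing.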

Funders need to understand what is and isn’t feasible, and to redirect their desire for community impact evaluations away from small nonprofits and toward the university and research world, so the nonprofits they support are free to work toward a better world.

We all need to understand that if we as a sector lean on nonprofits to provide data they simply don’t have the infrastructure to provide, what we’ll get is not better data—in fact, we may get data that’s worse. Organizations pushed to provide impact data to get funding will provide something, but it’s not likely to be the high-quality data or strategic metrics that would actually help them improve, or that would help the sector learn anything about the effectiveness of the services they provide.

These organizations rely on funders to help them meet their missions, but sometimes the reporting requirements that come with that funding impose burdens that make it harder for them to carry out their work.