We’ve been thinking a lot about nonprofit results data at Idealware. As part of a project for the Bill & Melinda Gates Foundation, we’ve conducted more than 30 interviews, a survey, and a huge amount of research to understand both how organizations think about results data—whether that’s outcomes, outputs, evaluation data, program management data, or any of a number of other current terms—and what’s being done in the space.
As part of that research, we’ve itemized a sizable list of initiatives that provide educational resources, research, or standards to help nonprofits manage results data. It’s a really interesting list, and includes more than 370 different initiatives—because of the nature of our research techniques, they’re mostly in the U.S., but about 65 are in other countries or global in nature. They span everything from:
- Open data initiatives from governments and large organizations
- Standards-setting initiatives trying to drive organizations toward tracking comparable data
- Definitions of standard indicators across organizations for particular sectors or programs
- Aggregations of research that’s been done in particular sectors
- Dashboards of indicators for particular communities or topic areas
As we begin to analyze this dataset, a few things have already surprised us, including the following:
- There are so many initiatives. Are they different enough to all be useful, or do many of them simply duplicate each other?
- There are so many different organizations suggesting data or indicator standards. Does this help drive us toward standards, or do these standards conflict?
- For many sectors, there’s an enormous amount of research available about what’s been done and how to apply it to nonprofit programs. Are nonprofit organizations aware of all of it?
- It’s much easier to find initiatives in some sectors than others. For instance, we have more than 100 Community Building initiatives on our list… but that’s because the Community Indicator Consortium has aggregated all of the initiatives it is aware of.
A key purpose of our research was to understand what was easily “discoverable,” as opposed to uncovering everything that exists. But how do we now compare the data across sectors in a sensible way? We haven’t yet finished analyzing it, but our dataset is reasonably complete.
“We’ve conducted interviews, a survey, and a huge amount of research to understand how organizations think about results data.”
We’ve coded our 374 initiatives with descriptive metadata as well as information about the ways they help the sector collect, aggregate, and standardize data and make it discoverable to others. Want to see the raw data in the next month or so? Email me at email@example.com and I can send it to you in Excel.
Our next step, which begins shortly, is to launch a microsite that provides infographics and breakdowns of the existing initiatives we found as well as the ability to explore the dataset we’ve developed. We’re excited to be working alongside Markets For Good to build this new platform, so stay tuned for more information as the project progresses.
Many thanks to Laura Quinn and the team at Idealware for their hard work and research. We’re looking forward to helping launch the next stage of the project here on the Markets For Good platform. For further updates, follow them on Twitter.