Reliable and accessible data in the social sector is hard to come by. But changing the way we generate, use, and share data requires more than simply creating better collections of data. Independent consultant Cynthia Gibson, Ph.D., takes on this topic, starting with a brief look back at how we’ve arrived at this point and following with what we’ll need to move forward. For an alternate, extended version of this post, please see Cindy’s entry on the blog site for the Center for Effective Philanthropy.
The idea of an information marketplace for the social sector isn’t a new one. But it’s one that’s never quite managed to jump from the drafting table to full-blown operation.
Despite the fits and starts, however, there have been important steps forward. A decade ago, a handful of national funders converged to support some of the country’s very first organizations that were using then-new technologies to collect, aggregate, and disseminate information and data about nonprofits—among them, Guidestar, VolunteerMatch, Network for Good, Idealist.org, and TechSoup.
Today, those organizations have become some of the most successful in the social sector and form a vital part of a new kind of information infrastructure. What’s been missing are more targeted opportunities for these groups and others to pool their talents and resources to create something bigger and potentially better: the kind of accessible information-sharing and information-generating system proposed by Markets For Good.
Who doesn’t want that, especially now, given the onslaught of “big data” and the vocal demand for organizations to demonstrate their effectiveness and ever-elusive “impact”?
So, the challenge isn’t likely to be rallying the troops around this idea. What will be a challenge is grappling with the really hard questions that must be answered before investing millions in something that may end up in the graveyard of data initiatives. Questions like: Who’s asking for data? Will people use it? What’s the incentive to collaborate? What kinds of data? Does having access to data mean it will lead to real knowledge that a wide range of constituencies—including investors and social sector groups—can actually apply and use?
Among the many questions, the biggest one is: data for what? To help donors make better philanthropic investment decisions? To help nonprofits benchmark their performance against other organizations or sub-sectors? To help governments assess which intermediaries are achieving their goals? To provide the public with information about these organizations and what they’re doing? To help inform policy debates?
“Data for what?” is a question that requires serious thinking about which variables will be used in each circumstance, for each constituency, for what purpose, and under which assumptions. It also underscores that data aren’t valuable as a bunch of numbers on their own: they’re merely raw material to be analyzed, contextualized, and applied in ways that will help practitioners, policymakers, beneficiaries, and investors make better decisions, improve services, or create more responsive programs or legislation.
But even if the data go beyond the numbers, will that lead to real knowledge about what actually matters?
Collecting data—of all kinds—is one thing, but analyzing it is another and requires that we put it into context. We can churn out reams of hard data, but it is of no value if no one is there to make it comprehensible and applicable in the real world. Just because we have stacks of outputs showing that counseling “improves people’s lives,” for example, doesn’t mean that policymakers will rush to fund counseling programs.
Further, improving the ways we collect and share information of all kinds—not just numbers—is not a solution in itself. That assumes information-sharing is an unfettered good and, further, that all information is of high quality and can be easily put to use. This isn’t true. Yelp might offer helpful information, but it’s hardly the place to get reliable information that’s been vetted or has some evidence behind it. The mountain of big data being generated is exciting, but it also includes misinformation that can be (and already is being) manipulated by people and institutions to convey the findings they want. That leads to more echo chambers or naïve trust in questionable output.
It’ll be important for the social sector to generate trusted and reliable information about “what works.” That will require formal and informal collaborations that allow us (as well as donors and constituents) to thoughtfully evaluate and analyze information. Incorporating that kind of vetting procedure would strengthen the infrastructure and raise it to a new and important level. For example, we might collect and share data in the commons via an open-source approach (à la Wikipedia), but then analyze it more deeply by having experts weigh in on what’s real and what’s noise. This “expert” capacity should be cultivated within our organizations and sought out in external networks. After all, research shows that the best decisions are made when “real people” and experts work together.