Rounding out our conversation on evaluation is this view from the standpoint of technology, by David Henderson, founder of Idealistics. If technology is a tool, then we might reframe the new fixation on data and tech by recognizing that tools don’t build houses. People do. David goes deeper on the topic to argue for a vision (and skill set) correction in the way we regard evaluation and technology.
…
I run a data analytics firm that utilizes technology to help organizations evaluate their effectiveness and improve results. Given the hype around data in the social sector, it’s not a bad time to be a nerd for social good.
But amidst all this hope and promise, I’m afraid our collective enthusiasm for hackathons and data visualizations has allowed us to stray from the following two simple rules:
- Technology should help, not hurt
- People make decisions, not data
If evaluative metrics are to become a core component of the social sector’s work, frontline organizations need to have the freedom to use the technologies that work for them, free from funder mandates. More important, the sector needs to invest in raising the collective data literacy of all social sector workers.
Indeed, data analytics should not be the realm of us nerds alone. Instead, a basic understanding of evaluative principles should be a prerequisite for entering the field.
Teched out
Every organization should have a data collection system, as nonprofit consultant David Hunter has long argued. While some organizations have no formalized mechanism for capturing outcomes data at all, the organizations I work with typically enter their outcomes data into multiple databases: one for each funder, plus a database they maintain for themselves.
This is a ludicrous practice. I understand funders’ desire to receive raw data from their grantees, and wholeheartedly support this objective. But grantees should be free to use the data collection system of their choosing, instead of being forced to duplicate entries across any number of proprietary databases. Not only does this practice waste frontline organizations’ time, it also conflates evaluation with compliance reporting, leaving organizations with little appetite to explore data for program improvement.
Funding entities have a significant opportunity to remove an unnecessary pain point from the evaluation process by no longer forcing grantees to enter information into proprietary systems and instead supporting data interoperability through open standards.
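As a rough sketch of what interoperability through an open standard could look like in practice, the example below defines a common outcomes record and exports the same data once in two widely readable formats. The `OutcomeRecord` schema, its field names, and the output files are hypothetical illustrations, not an existing sector standard.

```python
"""A hypothetical open outcomes-data format, sketched for illustration.

The OutcomeRecord schema and its field names are invented, not an
existing sector standard. The point is that a grantee enters each
record once and exports the same file to every funder that accepts
the shared format, instead of re-keying data into proprietary systems.
"""

import csv
import json
from dataclasses import asdict, dataclass


@dataclass
class OutcomeRecord:
    participant_id: str  # de-identified participant reference
    program: str         # program or intervention name
    indicator: str       # what was measured, e.g. "housed_90_days"
    value: float         # observed result
    recorded_on: str     # ISO 8601 date


records = [
    OutcomeRecord("p-001", "housing-first", "housed_90_days", 1.0, "2013-04-02"),
    OutcomeRecord("p-002", "housing-first", "housed_90_days", 0.0, "2013-04-02"),
]

# One data set, exported once, readable by any system that supports the format.
with open("outcomes.json", "w") as f:
    json.dump([asdict(r) for r in records], f, indent=2)

with open("outcomes.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(records[0])))
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
```

The design choice worth noting is that the grantee keeps whatever internal system works for them; the standard only governs the export that funders consume.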
Building data repositories is a fairly trivial undertaking, which I believe is why so many funders have opted to develop their own. Of course, the real cost of these systems is not the financial burden of providing a database to grantees, but rather the social loss of grantees’ time and energy spent on duplicative data entry.
Organizational learning
Where and how to store data is not a terribly interesting problem. The more interesting question is what to do with outcomes metrics, and how to use evaluative techniques to improve social impact.
Fancy visualizations and infographics have raised the visibility of data in the social sector, but these efforts have done little to raise our collective evaluative IQ.
In my work, I help social sector organizations use their outcomes metrics to develop predictive models that not only show how an organization is doing, but also provide insight into how its interventions can improve. Whatever the analytical focus of a given engagement, the subtext is always the same: cultivating organizational competence in evaluative inquiry.
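As a deliberately simplified sketch of this kind of analysis (not the specific models described above), the example below fits a logistic regression from scikit-learn to synthetic program data; the features, the outcome, and the effect sizes are all invented for illustration.

```python
"""A simplified illustration of using outcomes data to fit a predictive
model. The data are synthetic and the effect sizes invented; the aim is
only to show the shape of the exercise."""

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical program data: contact hours, a baseline risk score,
# and whether the participant completed case management.
contact_hours = rng.normal(10, 3, n)
baseline_risk = rng.normal(0, 1, n)
completed_case_mgmt = rng.integers(0, 2, n)

# Simulated follow-up outcome (e.g. stable housing) that depends on the
# inputs plus noise; purely illustrative numbers.
logit = -1.0 + 0.15 * contact_hours - 0.8 * baseline_risk + 0.9 * completed_case_mgmt
outcome = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([contact_hours, baseline_risk, completed_case_mgmt])
X_train, X_test, y_train, y_test = train_test_split(X, outcome, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

print("held-out accuracy:", round(model.score(X_test, y_test), 2))
print("coefficients (contact hours, baseline risk, case mgmt):",
      np.round(model.coef_[0], 2))
```

The value of an exercise like this lies less in the accuracy score than in the habit it builds: asking which program factors appear to move the outcome, and then testing whether changing them actually does.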
Of course, learning doesn’t happen at a single point in time. There is a reason students are assigned homework and take multiple quizzes in an academic term prior to taking a final exam. Homework and quizzes improve learning, while the final exam is an opportunity to demonstrate knowledge.
But to date, we as a sector have tended to treat evaluation as the equivalent of administering a final exam to a room full of students who have not attended class. Funders require annual reports demonstrating success, and evaluation consultants swoop in to write one-off reports that effectively assign organizations letter grades. This is the wrong way to approach evaluation.
A better approach is to invest in developing social sector workers’ ability to understand what their data says, which questions evaluative inquiry can inform, and where the limits of statistical analysis lie.
Ultimately, understanding evaluative principles is everyone’s responsibility. Just as everyone in the medical profession knows what a carotid artery is, so too should everyone in the social sector know what a counterfactual is. To get there, we need to support organizations in choosing one data collection scheme that works for them, and reject the anti-intellectualism surrounding evaluation that favors pretty graphs over mastering terminology.
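For readers new to the term, a counterfactual is an estimate of what would have happened in the absence of a program. The toy example below, with invented numbers, shows why it matters: a program group’s raw success rate overstates the program’s effect unless it is set against a comparison group that stands in for the counterfactual.

```python
"""A toy counterfactual comparison with invented numbers: the comparison
group stands in for what would have happened without the program, so the
estimated effect is the difference between groups, not the program
group's raw success rate."""

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical follow-up outcomes (True = success).
program_group = rng.random(200) < 0.60     # 60% succeed with the program
comparison_group = rng.random(200) < 0.45  # 45% succeed without it

raw_success = program_group.mean()
estimated_effect = program_group.mean() - comparison_group.mean()

print(f"program group success rate: {raw_success:.0%}")
print(f"estimated effect vs. counterfactual: {estimated_effect:+.0%}")
```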