

Meeting the Challenge of Impact Evaluation 


Data scientist Nick Hamlin explains how GlobalGiving ran an impact study to measure its own effectiveness and why all nonprofits should follow suit.

Digital Impact 4Q4 Podcast: Nick Hamlin on Impact Evaluation


00:00 CHRIS DELATORRE: Welcome to Digital Impact 4Q4, I’m Chris Delatorre. Today’s four questions are for Nick Hamlin, Data Scientist at GlobalGiving. The member organization has put itself under the microscope with an impact study designed to measure its own effectiveness. Nick, let’s set the GlobalGiving study up for our listeners. What did the study set out to do and did you find what you were looking for?

00:30 NICK HAMLIN: Well, GlobalGiving is the first and largest crowdfunding community for global nonprofits. That means we support organizations working in 170 countries around the world by connecting them with money, information, networks, ideas — whatever they need to be more effective. It’s a complex mission and so measuring the impact of our work is challenging. We needed to design our first formal impact evaluation to at least start to unpack that. What we did was we partnered with an organization called Pact. Pact maintains an impact measurement framework called the Organizational Performance Index, or the OPI.

“It’s even more important for smaller organizations to embrace this idea of impact research.”

The OPI measures the capacity of a nonprofit organization across eight different categories. These are focus areas like program delivery, learning, reach, diversity of funding – things like that. And it provides a methodology for quantitatively measuring capacity in those eight categories. This is a tool that is well recognized in the sector — it’s used by USAID, the Aga Khan Foundation — so we were confident that there’s a strong body of work going into it.

What we did was we looked at the OPI scores for a group of GlobalGiving partners in India and compared them to a similar group of organizations, also in India, that were not part of GlobalGiving. And so by looking at the changes in the scores over time for those two groups, we were able to get a sense of which areas we were having an impact in and which we weren’t. And what we saw was, of those eight categories, the one where we saw a strong signal of GlobalGiving’s impact was in how we were able to help organizations become more community led and better at using feedback from their target populations.
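The episode doesn’t specify the statistics behind that comparison, but the design Hamlin describes (per-organization changes in OPI scores, compared between a treatment group and a control group) can be sketched roughly in code. Everything below is hypothetical: the column names, the toy scores, and the choice of Welch’s t-test are illustrative assumptions, not the study’s actual procedure.

```python
# Rough sketch of the treatment/control comparison described above.
# Column names and scores are hypothetical; the study's actual
# statistical procedure isn't specified in the episode.
import pandas as pd
from scipy import stats

# One row per organization per OPI category, scored at two points in time.
df = pd.DataFrame({
    "org_id":   [1, 1, 2, 2, 3, 3, 4, 4],
    "group":    ["treatment"] * 4 + ["control"] * 4,
    "category": ["program_delivery", "community_feedback"] * 4,
    "baseline": [2, 1, 3, 2, 2, 1, 3, 2],
    "endline":  [3, 3, 3, 3, 2, 1, 4, 3],
})
df["change"] = df["endline"] - df["baseline"]

# Compare the average change between groups within each OPI category.
for category, scores in df.groupby("category"):
    treated = scores.loc[scores["group"] == "treatment", "change"]
    control = scores.loc[scores["group"] == "control", "change"]
    t, p = stats.ttest_ind(treated, control, equal_var=False)  # Welch's t-test
    print(f"{category}: effect={treated.mean() - control.mean():+.2f} (p={p:.3f})")
```

With roughly 30 organizations per group, a simple per-category test like this would be one plausible starting point; a real analysis would also need to account for multiple comparisons across the eight categories.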

02:13 CHRIS DELATORRE: You summarized your results in a blog post you published in October. You noted that a large sample size helps to ensure reliability. Now, GlobalGiving operates in nearly every country. When I hear that, I think of an organization with a hefty operating budget, but in fact GlobalGiving is a fairly small operation, right? So this question is a 2-for-1. First, between control and treatment groups, what number are we talking? And second, why should small nonprofits or those with limited resources not be afraid to embrace this kind of research?

02:44 NICK HAMLIN: To your first question, we’ve got about 30 organizations each in the treatment group and the control group. On your second question, I would argue that it’s even more important for smaller organizations to embrace this idea of impact research, just because there’s less room in your resource envelope for spinning your wheels on projects that aren’t serving your mission. That said, this creates a bit of a catch-22, right? Because in order to figure out where to focus your work, you have to invest in understanding your work and the impact it’s having. But I think if it’s done well, this kind of research certainly pays off in the long term because of the insights it can yield.

It’s also worth remembering that these initial tests need not be totally perfect or wide-ranging. Even simple pilot studies, reported transparently as they progress, help create a culture of data-driven decision making that an organization can build on as it moves toward larger and larger evaluations of its work.

03:49 CHRIS DELATORRE: Now, about your method. You compared vetted prospects to full members. But first you ensured that organizations in each group were as similar as possible, and I quote, “so that our results aren’t biased by underlying differences between the control and treatment groups.” What are a few examples of the underlying differences you mention that might introduce bias?

04:13 NICK HAMLIN: That’s a great question. Really, this comes down to ruling out as many other possible explanations for the results we’re seeing as we can, so that we can increase our confidence that those results are driven by GlobalGiving. That’s really challenging, but I’ll give you one important example. Ideally, we would randomly assign organizations to the treatment group or the control group; that’s what makes a randomized controlled trial a randomized controlled trial. But we can’t do that here. For logistical and ethical reasons, we can’t just kick organizations off of GlobalGiving.

“We’ve built out a solid foundation of data, analytics, process — a culture built around iterative learning.”

So instead, we might say, OK, we’ll look at organizations that are on the site and compare them to organizations that aren’t. If we just did that and didn’t take any additional steps, it’s pretty plausible that any difference we saw in the results would be driven, at least in part, by selection bias: the organizations that are more likely to succeed anyway might just be more inclined to join GlobalGiving. We’d then have a situation where the GlobalGiving organizations score more highly on the OPI, but that wouldn’t really reflect the impact we’re having.

This is why we imposed the vetting restriction. That is, as you said, all organizations in the study, whether in the treatment group or the control group, have been through the GlobalGiving vetting process. This means that no matter which group an organization is in, all of them have attained a consistent baseline level of capacity. That doesn’t totally rule out the selection bias I’m talking about, but it dramatically reduces the chance of it becoming a problem.
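To make that concrete, here is a hypothetical simulation of the selection problem Hamlin describes. All numbers are invented and “vetting” is reduced to a simple capacity threshold: the true platform effect is zero, yet a naive comparison of joiners and non-joiners shows a sizable gap, and requiring both groups to clear the same bar shrinks it.

```python
# Hypothetical simulation (all numbers invented) of selection bias:
# the true platform effect here is zero, yet the naive comparison
# shows a gap because stronger organizations join more often.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
capacity = rng.normal(50, 10, n)                 # latent organizational capacity
p_join = 1 / (1 + np.exp(-(capacity - 50) / 5))  # stronger orgs are likelier to join
joined = rng.random(n) < p_join
opi = capacity + rng.normal(0, 5, n)             # observed OPI score; zero true effect

naive_gap = opi[joined].mean() - opi[~joined].mean()

vetted = capacity > 55                           # both groups must clear the same bar
vetted_gap = opi[joined & vetted].mean() - opi[~joined & vetted].mean()

print(f"naive gap:  {naive_gap:.1f}")            # sizable, despite zero true effect
print(f"vetted gap: {vetted_gap:.1f}")           # smaller: closer to the truth
```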

05:57 CHRIS DELATORRE: Final question. At one point in your blog you suggest enlisting outside help. What advice do you have for listeners who are interested but not sure where to start? How can they help build on this progress going forward?

06:11 NICK HAMLIN: I think for many organizations, and many people in general, it can be really scary to admit that you don’t know something. That’s just the nature of how human brains work. But this kind of work is really challenging, and no one person or organization has all the answers. So my advice would be to reach out to the vast number of other people out there who are eager to help with this kind of work. That can be academic researchers, peer organizations, or others who have gone through this process before. If we had tried to do this completely in a vacuum and not involved outside voices, we would never have made any progress.

Nick Hamlin explores the impact study in a post published in April. “It was too important for us not to try,” he writes.

The other piece of advice I’d give, as I said before, is don’t be afraid to start with something small. We were able to get to the point of doing this initial impact study because we’d built out a solid foundation of data, analytics, process — a culture built around iterative learning — that allowed us to track the progress of smaller, more specific programs before we started thinking about this more holistic view of GlobalGiving writ large.

The last thing I’ll say is to document your progress. The clearer and more consistent a record you can keep of the lessons you’re learning, the better off you’re going to be. And so, for those of your listeners who would like to learn more about our progress, I encourage you to check out globalgiving.org. We will continue to share updates on the work we’re doing there, particularly in our Learn Library, where you can find the blog post about this study as well as many other experiments that we’re running. We’re also available on all major social media platforms @GlobalGiving.

07:52 CHRIS DELATORRE: Nick Hamlin, Data Scientist at GlobalGiving, thank you.

Digital Impact is a program of the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society. Follow this and other episodes at digitalimpact.io and on Twitter @dgtlimpact with #4Q4Data.