

Expanding the Social Impact Measurement Toolkit

Opinion

Andrew Means advocates that social sector data practitioners take a more open-minded view of alternative data methodologies.

This year, I had the opportunity to participate in both the Impact Convergence (ImpCon) and American Evaluation Association (AEA) conferences, two innovative forums for individuals helping social institutions measure and improve their impact.

The Impact Convergence conference focused on the impact measurement and impact investing community, while the AEA conference focused on the community of nonprofits and multilateral organizations interested in evaluation.

Attending these two events back-to-back highlighted how entrenched we social sector data practitioners can become in our own perspectives, methodologies, and philosophies. Many of us came to this work from a specific professional field—be it economics, social work, computer science, or statistics. Most of us have chosen to apply our skills to a specific issue or cause within the social sector. The more specialized we are, the stronger our opinions can be about what makes for good data and rigorous analysis.

These perspectives can vary widely, even within the social sector. Some of us are Bayesians while others are Frequentists. Some are fastidious about sampling methods while others are comfortable extracting insights from less-than-perfect data sets. Some prefer writing code in R over Python, while others use tools like Stata or SPSS. Some turn their noses up at survey data while others swear by it. Some refuse to touch social media data while others see it as an essential resource.
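To make just one of those contrasts concrete, here is a minimal, purely illustrative Python sketch (with made-up survey numbers, not data from either conference) that estimates the same hypothetical program completion rate both ways: a frequentist confidence interval and a Bayesian credible interval from a Beta posterior. The point is not that one answer is right, but that each approach speaks to the same question in its own language.

# Illustrative only: a hypothetical survey of 120 participants, 78 of whom completed the program.
import numpy as np
from scipy import stats

completed, surveyed = 78, 120
p_hat = completed / surveyed

# Frequentist: 95% Wald confidence interval for the completion rate.
se = np.sqrt(p_hat * (1 - p_hat) / surveyed)
z = stats.norm.ppf(0.975)
freq_low, freq_high = p_hat - z * se, p_hat + z * se

# Bayesian: uniform Beta(1, 1) prior gives a Beta posterior; take the 95% credible interval.
posterior = stats.beta(1 + completed, 1 + (surveyed - completed))
bayes_low, bayes_high = posterior.ppf(0.025), posterior.ppf(0.975)

print(f"Frequentist 95% confidence interval: ({freq_low:.3f}, {freq_high:.3f})")
print(f"Bayesian 95% credible interval:      ({bayes_low:.3f}, {bayes_high:.3f})")

With a sample this size the two intervals land in nearly the same place, which is part of the lesson: the methods are complementary tools, not opposing camps.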

There is nothing inherently wrong with having a strong preference for one set of data methods over another. The problem arises when we assume our preferred methods can answer every question about every problem, especially complex problems that span issues, geographies, and communities.

I understand this. We all want to believe that our methods are the best ones and that our answers are the most accurate.

But that often means that when someone comes along with a new method or speaks to us in a different “data language,” we are skeptical rather than open-minded. The fact remains that sometimes other people’s methods are better than ours.

At gatherings like AEA or ImpCon, I often find myself debating the advantages of Bayesian versus Frequentist approaches (more fun than it sounds), or arguing the merits of predictive tools and machine learning. These conversations are usually fruitful and engaging, but I often feel that my colleagues and I are trying to prove each other wrong, and persuade each other that ours is in fact the best approach/method/tool/philosophy.

The truth is that these discussions do not have to be either/or. We should attend these gatherings and have these conversations not so that we can find the single best data method to solve our chosen social problem, but to increase our knowledge and our capacity as much as possible so that we can solve more problems, more effectively and more collaboratively.

It’s about expanding the toolkit.

This is a hard thing to do. Many of us fancy ourselves experts and have spent our careers deploying and refining our preferred data methods, often to great success. To suddenly be told that a method you are unfamiliar with works better than what you have been doing for years (or even decades) is a scary thing.

That is why I admire people like Michael Bamberger. Michael is a respected evaluator whom I ran into at AEA this year (he has missed only one AEA conference, when his plane had to turn around en route to SFO after the 1989 Loma Prieta earthquake). Michael is a giant in the field, and he is still going strong well past the age when most people would choose to rest on their laurels.

I met Michael several years ago when he reached out to me to learn more about data science. He told me that he was intrigued by the possibility and potential of these methods and wanted to learn more about them. Even at the peak of his career he allowed himself to become a student again—to expand his toolkit.

That someone with Michael’s experience and stature is still seeking out new ideas is a lesson to all of us. His example has helped me approach my work with an open mind and embrace promising new methods. Let’s all commit to expanding our toolkits!