Analysis and planning with a computer in the office and many charts on the wall

Field Notes

Building a Culture of Evaluation: Empowering the Social Sector to Put Data to Good Use

Amanda Babine, Director of Evaluate for Change, explains how social sector professionals can learn to embrace a culture of data use and evaluation to advance their missions.

Picture this: I’m standing in front of a room of dozens of social sector professionals, ranging from Executive Directors to program assistants, and I ask, as I often do, “Raise your hand if you have taken a statistics course before.” As usual, a majority of hands go up.

“Now keep your hand up if you enjoyed that class.” Hands shoot down almost as fast as they were lifted. Suddenly, we have only a couple people with raised hands and the room is filled with laughter.

As the conversation continues, we touch on their backgrounds in statistics or evaluation and quickly find that an overwhelming majority of participants found these courses “boring,” “dreadful,” or “difficult” (all their words, not mine).

The language they use describes a common experience with required statistics courses that focus on fundamentals. While important, this type of formal education can seem abstract and unrelated to the day-to-day work of nonprofits across the country.

This is an all-too-common narrative that sets the stage for a day-long training on building a culture of evaluation among social sector professionals. After spending time assessing prior knowledge and the level of training that participants have, the question remains: are they truly prepared to effectively measure their impact? And if not, what needs to change in order to create a more data-driven and results-oriented sector that makes a measurable difference?

Rooting the type of formal education I speak of in real-world practice is just one way we can increase the motivation of tomorrow’s nonprofit workers to leverage data and evaluation. As grants are increasingly tied to outcomes, the social sector will be pushed to hire and retain workers with a strong evaluation skill set. Not only creating but requiring curricula that tackle how to measure impact among people, communities, and social issues is a step in the right direction. But this type of institutional change will take time, making it crucial to create alternatives.

Currently, one of the most common solutions to filling the shortage of staff with evaluation skills is to hire evaluation and data science consultants. There are many data scientists and program evaluators specializing in working with nonprofits; however, relying on their services is only a quick fix, not the long-term solution that the social sector needs. While these specialists have strong technical skills, they may be disconnected from the experience of working with specific populations, and they may misjudge an organization’s capacity to implement an evaluation.

For years, I personally worked closely with both evaluators and nonprofits throughout New York City, serving as a resource to the mid-sized organizations that needed to collect and analyze data but didn’t have the expertise required to execute the project.

I’ve found that the relationships between evaluators and organizations can often become strained by unclear expectations and a poor understanding of the nonprofit’s capacity to leverage data. Most staff members I worked with showed an interest in learning the required skills, but they never had the opportunity to be properly trained to collect and report on data. Adopting the practice of tracking internal processes and overall outcomes has been shown to help organizations become more effective. So why are so many organizations still overlooking the techniques that we data nerds find so important?

In my experience, there are two mutually influencing factors behind this gap: commitment and capacity. While technical skills are the fundamentals needed to measure true impact, there is a crucial step that most evaluators miss when they work or consult with organizations — culture creation. Organizations must strive to create a culture that understands the value and reward of collecting and tracking data. Like any new initiative, efforts to implement internal evaluation will almost always fall short without buy-in from the organization.

There are also other internal organizational barriers to culture change, such as staff who might be intimidated by math or worried that the findings will be used against them, which can happen when organizations have a punitive process. Once cultural issues like these are addressed, the learning can happen!

For the last three years, I have had the pleasure of directing an organization that focuses on building the evaluation capacity of social sector organizations. At Evaluate for Change, we do not consult, but rather teach. Our curriculum gives adult learners engaging content, real-world examples, and plenty of time to practice the skills they need to measure the impact of their organizations.

The process we take isn’t solely about the technical skills — if that were the case, it might be a bit easier. Our pedagogy is rooted in the idea that organizations need a more holistic approach to implementing a culture of evaluation, and we tailor our classes to accommodate that. We take the fundamentals that most people have blocked out since their first introduction to statistics and reteach them in a way that is relevant and engaging. These are the skills that people in the social sector need to be data-savvy and to understand the importance of using evaluation to advance their organizational missions.

Let me be clear: Evaluate for Change is not the only alternative for helping the nonprofit workforce gain the skills they need to measure and grow their impact.

Currently, we rely on a fractured system that is not self-sustaining — the paradigm of data scientists and social sector professionals working separately needs to shift to allow for a more integrated approach. Whether you’re an evaluation consultant, a college professor teaching statistics, or an enthusiastic social sector professional, the key is to emphasize the need among social sector organizations for both commitment to using data and capacity for implementing and sustaining sound data use practices. The resulting culture of evaluation can empower social sector professionals and their organizations to realize their true impact.


Special thanks to Amanda Babine for sharing her expertise on how to empower nonprofits through a culture of data use and evaluation. Visit the Evaluate for Change website to learn more.


Comments (6)
  1. This is a culture change for the non-profit sector as a whole. You make a good point about grant applications increasingly looking for evidence of “impact.” The healthcare industry is also pivoting away from a “fee for service” model to outcome-based reimbursements.

    I believe that introducing data gathering processes and evaluation protocols into organizations at this point is a way to start a culture shift that relies more heavily on evidence-based decision making. Most non-profits can’t afford a trial-and-error model of program development, service delivery, hiring and fundraising.

    This is something that needs to be done in-house to be truly effective and organization specific.

  2. Jenna Boyle says:

    I’m a social worker at a Federally Qualified Health Center, and our work is already data-driven! I’m one of the kids who liked statistics, and now I get to do a lot of the quality-based work at our clinic. As a rural practice in Vermont, we rely on the funds we get by showing the good work we do, or by showing how we are improving (or plan to).

    Fortunately, in the medical world, we have electronic medical records with built-in or add-on technology to report the data. The challenge can be to make sure the information we collect ends up where the reports look for it, and that the data is clean. It can be exciting to start with a ‘problem,’ track the improvement, and show your staff what their efforts have produced!

  3. Laura Punnett says:

    You’ve made a number of important points here. I’ll pick up on one: that this is not only about technical skills. A numbers geek (including myself here) can use data to answer a question – but first someone has to frame which questions need to be answered, in what sequence and with what types of information. So of course it isn’t enough to hire a statistician; this requires content knowledge about the problem to be solved, as well as what each stakeholder needs to know at the end. The entire program needs to be designed from the beginning to embed meaningful data collection points throughout the process.
    This includes qualitative evaluation, as well, which is often essential for understanding whether the program components worked as desired, and why or why not. In the public health literature, there has been a lot of discussion recently about what constitutes the right or best or legitimate form of evidence to underlie “evidence-based” practice. For evaluating outcomes at the level of the individual participant (e.g., patient or client) we need individual-level analyses. To understand institutional-level obstacles to and opportunities for change, other methods are needed (such as the models and constructs of “Dissemination & Implementation science”). Both levels of assessment are essential for determining how to move forward to fully effective best practices.

    1. Annette Fitzgerald Hume says:

      Excellent point, Laura. Qualitative data is perhaps even more important because often the wrong problem is being addressed. This is why it’s so important to have consumers on social service boards of directors.

  4. Amanda Babine says:

    It’s great to hear such a wealth of knowledge from practitioners using data in the healthcare and public health fields. Thank you for sharing. It’s encouraging to know that those with extensive statistical backgrounds understand the importance of having content knowledge and engaging stakeholders in the process of evaluation.

  5. Lawrence Hunt says:

    Thank you for sharing your thoughts, Amanda. I enjoyed reading your article and the comments made so far.

    Besides program and organization effectiveness or impact, I think this data collection and analysis issue also extends to board performance self-assessment, which is a common practice in the nonprofit sector. Designing a data collection instrument is a job for a professional with years of experience in that field, yet you find on-line design-it-yourself board self-assessment survey guidelines. A skilled professional would be concerned by the potential for an unacceptably high level of response bias in the data collected using a DIY survey and, as a result, unreliable results produced from any analysis of that data.

    Other self-assessment tools have clear design faults in their surveys that could make the results they produce unreliable. It is also common for results to be produced by a simple aggregation of the “score” for each question, with no weighting applied to reflect the relative importance of each factor to management performance, and no mention made of the expected level of response bias in the results.
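
    To make the contrast concrete, here is a minimal sketch in Python of the difference between a simple aggregate and a weighted, bias-adjusted score. The assessment areas, weights, and bias-correction factor below are all invented for illustration; this is not the method behind NPdirection’s tool or any other real instrument.

    ```python
    # Illustrative sketch only: areas, weights, and the bias-correction
    # factor are invented; no real tool's scoring method is shown here.

    responses = {          # mean 1-5 self-rating per assessment area
        "strategy": 4.2,
        "finance": 3.1,
        "fundraising": 2.8,
    }

    weights = {            # hypothetical relative importance of each area
        "strategy": 0.5,
        "finance": 0.3,
        "fundraising": 0.2,
    }

    BIAS_CORRECTION = 0.85  # hypothetical: boards tend to over-rate themselves

    # Simple aggregation: every question counts equally and bias is ignored.
    simple_score = sum(responses.values()) / len(responses)

    # Weighted, bias-adjusted score: each area contributes in proportion to
    # its importance, and the estimated over-rating is discounted.
    weighted_score = sum(responses[a] * weights[a] for a in responses)
    adjusted_score = weighted_score * BIAS_CORRECTION

    print(f"simple:   {simple_score:.2f}")    # 3.37
    print(f"adjusted: {adjusted_score:.2f}")  # 3.59 * 0.85 = 3.05
    ```

    The point is not the particular numbers but that the two procedures can rank the same board quite differently, which is exactly why these design choices deserve scrutiny.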

    So, I agree with you that there needs to be an awareness of these data collection and analysis issues within the management of every nonprofit that is involved in these processes, so that the reliability of the results obtained can be assessed.

    The survey used for NPdirection’s on-line board self-assessment tool (https://www.npdirection.com/home) took 12 months to develop, test, and refine; like all good instruments collecting subjective data, its average level of response bias was scientifically estimated, and a bias correction factor was built into the process that produces the results.
