MFG Archive

If You Build It, They Will Evaluate: Upping the Nonprofit Evaluation Game

Yes, we’re headed back to the cornfield. A discussion of tools and methods is one thing. The capacity and capability to use them, and the context in which they’re situated, make for an exploration closer to real-world implications. The non-starter for many great ideas and theories is their detachment from actual usage scenarios, i.e., detachment from the people making them work. We engage that thought with comment from Johanna Morariu and Ann Emery of Innovation Network. Using this organization’s State of Evaluation project as the focal point, the authors take a look at how evaluation is progressing as a discipline with impact.

They say that the only social problems left to solve are the hardest ones. Poverty, health, global food insecurity, the environment. With tight budgets and insatiable demand for their services, nonprofit organizations are in a tough spot. One solution to make the best of this challenging situation is to make sure that nonprofits are being efficient and effective, and to inform decisions with data.

Luckily, there’s a lot of interest in nonprofit evaluation. Let’s be honest: collecting data about nonprofits isn’t a new idea. But it has moved into some exciting and promising new frontiers in the last few years. For all this progress, though, some field-wide challenges continue to impede nonprofits from systematically collecting and using data.

A few years ago we started a research project to take the evaluation pulse of nonprofits: the State of Evaluation project. In the last round, we surveyed 546 nonprofit organizations from across the U.S. We found that 90% of nonprofits are engaged in some type of evaluation (great!), and we also found several areas with room for growth.

Room for growth: We found that few nonprofits are equipped with the staff know-how, time, and money to meaningfully engage in evaluation.

Time, money, and know-how: When asked about barriers to evaluation, 71% of nonprofits said that limited staff time was a significant challenge, 61% said insufficient financial resources were a significant challenge, and 39% said limited staff expertise in evaluation was a significant challenge.

Staffing: Overall, 18% of organizations had a full-time employee dedicated to evaluation. This is an area that’s gaining traction among larger organizations (those with budgets over $1M): in 2012, 53% of large nonprofits had a full-time employee dedicated to evaluation. In comparison, 9% of small organizations (those with budgets less than $500,000) had a full-time evaluation specialist—and these small nonprofits make up three-quarters of the sector. [1]

Prioritization: In our research project, we gave the nonprofits a list of ten organizational tasks and asked them to rank these tasks in order of importance. The tasks included: communications, evaluation, financial management, fundraising, governance, human resources, information technology, research, staff development, and strategic planning. These are tasks that most nonprofits are engaged in, to some degree. Evaluation was ranked #9, signalling that evaluation competes with other internal priorities, and usually falls to the bottom of the heap.

These challenges add up to very real concerns. Due to limited time and money, do grant reports contain errors? Are nonprofits or funders basing their decisions on erroneous information? Most importantly, how can we leverage evaluation so that nonprofits can effectively and efficiently achieve their missions?

How can funders support evaluation capacity? Nonprofits and other social sector organizations help us address our most critical issues. But they can’t do it alone. Nonprofits need to be supported by a strong evaluation infrastructure: technology, systems, and general evaluation know-how among staff. Based on our research and our experience in the field, we have outlined five areas where funders can play a role in ensuring that nonprofits are poised to tackle the world’s biggest social challenges:

  1. Invest in technology, tools, and resources: Online survey software, data analysis software, databases, and equipment (e.g., voice recorders to capture focus group data) can be great assets to grantees.
  2. Support data coaching and training. As the saying goes, “Why worry about Einstein’s pen? Thinking matters most.” Tech tools are most effective when combined with ongoing training about data collection, statistics, research methods, databases, data visualization, and other skill sets.
  3. Support both internal and external evaluation staff. Evaluation serves dual purposes: accountability and learning. External evaluators can bring technical expertise and provide perspective on accountability. Internal evaluators can provide much-needed follow-up and support for a learning agenda. Increasingly, internal and external evaluators team up to serve both purposes simultaneously.
  4. Engage nonprofits in conversations with their peers. Grantmakers are in a unique position to convene peer organizations to discuss issues of evaluation capacity and practice. Peer organizations often have much to learn from each other, and may be able to adopt and build upon each other’s evaluation successes.
  5. Engage nonprofits in conversations with their funders. In State of Evaluation 2012, 75% of nonprofits agreed or strongly agreed that they regularly discuss evaluation findings with funders. When discussing evaluation with funders, 82% of nonprofits agreed or strongly agreed that these conversations were useful. These generally upbeat opinions suggest that the nonprofit community is largely open to evaluation and all that it has to offer.

By investing in evaluation capacity building, grantmakers have an opportunity to lay the groundwork for organizations to collect good data, learn from their work, and improve their results.

[1] Source: Roeger, K. L., Blackwood, A., and Pettijohn, S. L. (2011). The Nonprofit Sector in Brief: Public Charities, Giving, and Volunteering, 2011. Urban Institute, National Center for Charitable Statistics. http://www.urban.org/publications/412434.html

Comments (3)
  1. Emmanuel Trepanier says:

    Very interesting post! As an evaluation consultant, I have often encountered reluctance from nonprofits to engage in traditional evaluations because of the resources they require. Other (somewhat lighter-touch yet very rigorous) assessment approaches (e.g., reviews, mapping, etc.) can be very helpful in guiding strategic planning and offer the methodological flexibility that these organizations require.

    Emmanuel Trepanier, Universalia Management Group, Montreal

    1. Ann Emery (Blog post co-author) says:

      Emmanuel,

      Great point. I agree, nonprofits are often reluctant to engage in evaluation because of the resources required – time, money, etc.

      We conduct evaluation through a right-size approach: finding the level of intensity, data collection methods, and rigor that best fit the nonprofit’s unique needs and evaluation capacity. Often the “gold standard” approach for a nonprofit is a single survey, a focus group, or just a logic model or theory of change, rather than a costly and time-intensive evaluation. Even the simplest evaluation methods can give nonprofits the information they need for decision making.

      Best of luck in your evaluation work, Ann

      1. Trina Willard says:

        Ann-

        I agree wholeheartedly. Making evaluation “work” for nonprofits entails being flexible and providing solutions that are scalable, to meet organizations where they are. Grounding the methods we use in the scientific underpinnings of research is important, as is our ability to make reasonable adjustments that suit real-world settings. From a nonprofit evaluation perspective, we’re typically gunning for an approach that helps these organizations achieve their primary goal, that is, informed decision-making on the ground.

Comments are closed.