Linking Outcomes Measurement with Software

Evaluating your organization’s programs is not trivial, but it’s necessary, and well within reach. Laura Quinn and Andrew Means bring theory down to shoe leather with a real-time case study that demonstrates not only how to measure nonprofit performance, but also how to choose and use the right software tools throughout the decision-making process.

Staff at the YMCA of Metro Chicago struggled to find ways to measure the organization’s programs, a problem common to many nonprofits. What goals were they trying to achieve? What would success look like? What software might help? But by thinking through the metrics and linking them to the right software, staff defined a strategy and tactics for measurement that, in practice, provided insight into the nonprofit’s programs.

Andrew Means, Director of Research and Analytics at YMCA of Metropolitan Chicago

What did the process look like? For the YMCA of Metro Chicago, it meant translating “fuzzier” mission goals into something that could be measured directly. Take, for example, an after-school program focused on helping junior high school kids mature into adults with fruitful lives: how do you measure outcomes around “fruitful” lives for 10,000 children?

Staff dug deeper to define the term, so that it would be possible, for example, to distinguish between two people, only one of whom is living a fruitful life. One way is to define “fruitful” as having a job that pays a living wage, but since the organization works with junior high school kids, that outcome won’t occur for a decade, far too late to be practical to measure. Another is to track the high school graduation rate, which research ties to earning a living wage, but that, too, is a few years down the road. Staff then realized that tracking whether junior high students are keeping pace with grade-level requirements and expectations is relatively easy to do, and since research shows a correlation between staying on grade level and graduating high school, it gave them a near-term metric connected to the program’s long-term goals.
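To make that concrete, here is a minimal sketch, in Python with pandas, of what computing such a proxy metric might look like. The data, column names, and grade-level logic are all hypothetical, invented for illustration rather than drawn from the YMCA’s actual systems.

```python
import pandas as pd

# Hypothetical participant records: one row per student.
records = pd.DataFrame({
    "student_id":    [101, 102, 103, 104],
    "grade":         [7, 7, 8, 8],
    "reading_level": [7.2, 5.9, 8.1, 6.8],  # assessed level, in grade-equivalents
    "math_level":    [6.8, 6.1, 8.4, 8.0],
})

# The measurable proxy: performing at or above grade level in both subjects.
records["on_grade_level"] = (
    (records["reading_level"] >= records["grade"])
    & (records["math_level"] >= records["grade"])
)

# A program-level metric that stands in for the long-term "fruitful life" goal.
rate = records["on_grade_level"].mean()
print(f"Participants on grade level: {rate:.0%}")
```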

You can apply this example to your organization: define the questions you want to answer about your programs and the measures of success you want to evaluate, then identify the metrics you’ll collect to do so. But we alluded to another step: determining the software and systems you’ll use to collect, track, and report on the data you need. You don’t need software to evaluate your programs, but the right tools can make the work easier and more effective.

At Idealware, we heard from a lot of nonprofits unsure what software they might need to evaluate their work, or how to use their existing systems in their efforts. After talking to a number of experts and conducting our own research, we identified five kinds of systems you can use. Using the YMCA of Metro Chicago as an example, here’s a quick overview of these five tools and how they might be incorporated into a program evaluation strategy:

  • Tools for Proactive Data Gathering collect data from the different sources your organization might find useful; for the YMCA of Metro Chicago, this includes tracking door access and card swipes at the organization’s physical facilities, collecting participant survey responses, and using mobile apps to gather data from field staff.
  • A Central Hub of Program Data is the system where you store the information you gather. For the YMCA of Metro Chicago, it’s a custom-designed database that pulls together a number of different data sources into a single location (a rough sketch of this kind of merge appears after this list).
  • Auxiliary Data Systems provide storage for all the data you cannot keep in your Central Hub – data that’s too complicated or distinct, like information from Learning Management Systems or Scientific Data Monitoring Systems, or that needs to be kept isolated for confidentiality reasons. The YMCA of Metro Chicago keeps most participant-level data for its after-school programs in a Case Management System, but it also has a Membership Management System with key information used for program evaluation.
  • Tools for Existing Data are also important. In our example, YMCA of Metro Chicago makes heavy use of public data. It has a data-sharing agreement with Chicago Public Schools to understand the progress of participating children and pulls in generalized demographic data about many program participants based on where they live to help staff understand, at a high level, the types of people they serve.
  • Finally, Reporting and Visualizing tools help people make use of data. For example, YMCA of Metro Chicago staff use the visualization tool Tableau to transform pre- and post-program scores into charts that show them as colored dots (a rough stand-in for that view is sketched below).
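To illustrate the “central hub” idea from the list above, here is a minimal sketch, assuming hypothetical extracts from each kind of system. Every table, column, and join key here is invented for illustration; the YMCA’s actual database is custom-built and far richer.

```python
import pandas as pd

# Proactively gathered data: facility card swipes, one row per visit.
swipes = pd.DataFrame({
    "participant_id": [1, 1, 2, 3],
    "visit_date": pd.to_datetime(
        ["2014-03-03", "2014-03-10", "2014-03-05", "2014-03-07"]
    ),
})

# Auxiliary-system extract: case-management records for after-school participants.
cases = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "program": ["after-school"] * 3,
    "zip_code": ["60614", "60637", "60622"],
})

# Existing public data: generalized demographics by ZIP code.
demographics = pd.DataFrame({
    "zip_code": ["60614", "60622", "60637"],
    "median_household_income": [88000, 61000, 29000],
})

# The "hub": one participant-level table joining visit counts,
# program records, and neighborhood context.
visit_counts = (
    swipes.groupby("participant_id").size().rename("visits").reset_index()
)
hub = (
    cases
    .merge(visit_counts, on="participant_id", how="left")
    .merge(demographics, on="zip_code", how="left")
)
print(hub)
```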

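And as a rough stand-in for the Tableau view in the last item above, the sketch below plots pre- and post-program scores as colored dots, one row per participant. Tableau itself is a point-and-click tool; this is just the same idea expressed in Python with matplotlib, using made-up scores.

```python
import matplotlib.pyplot as plt

# Hypothetical pre- and post-program assessment scores, one pair per participant.
participants = ["P1", "P2", "P3", "P4", "P5"]
pre_scores = [52, 61, 47, 70, 58]
post_scores = [68, 64, 62, 69, 75]

fig, ax = plt.subplots()
for i, (pre, post) in enumerate(zip(pre_scores, post_scores)):
    # Gray connector line, then a colored dot for each score.
    ax.plot([pre, post], [i, i], color="lightgray", zorder=1)
    ax.scatter(pre, i, color="tab:orange", zorder=2,
               label="Pre" if i == 0 else "_nolegend_")
    ax.scatter(post, i, color="tab:blue", zorder=2,
               label="Post" if i == 0 else "_nolegend_")

ax.set_yticks(range(len(participants)))
ax.set_yticklabels(participants)
ax.set_xlabel("Assessment score")
ax.set_title("Pre- vs. post-program scores")
ax.legend()
plt.show()
```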
To help nonprofits understand both the process of evaluating programs and the tools that can help along the way, we turned our research and experts’ advice into a free publication called Understanding Software for Program Evaluation. It details these five kinds of systems and the wide range of tools that can be used for each, and looks at how they fit together into a comprehensive strategy. Download a free PDF now. Evaluating your organization’s programs is not trivial, but it’s necessary, and well within reach. What strategies does your nonprofit use? What tools and systems are helping? Let us know in the comments.

This post was a joint publication: Laura Quinn is the Executive Director of Idealware, a nonprofit that conducts extensive research and provides trainings and written resources to help other nonprofits make smart decisions about software. Andrew Means is Director of Research and Analytics at YMCA of Metropolitan Chicago, a nonprofit committed to strengthening communities through youth development, healthy living, and social responsibility.