
Creating A Culture of Continuous Improvement Based On Data


Beth Kanter returns in her latest column, describing how nonprofits can develop a culture of improvement through the application of data.

 

I’m always on the lookout for useful and thought-provoking resources on how nonprofits can use data to make better decisions that lead to greater impact. So, when Mary K Winkler, one of the nonprofit data nerds I follow on Twitter and a Senior Research Associate at the Urban Institute’s Center on Nonprofits & Philanthropy specializing in performance management, told me via Twitter that she had just published a new guide on this topic, I had to check it out.

 

Moving Beyond a Culture of Compliance to a Culture of Continuous Improvement is a resource guide to help leadership, management, supervisory, and data-focused staff in Head Start and Early Head Start programs (1) understand how data, including data they already collect, can help them achieve their program goals; (2) learn techniques for fostering a culture of learning in their organization; and (3) increase their ability to identify and address gaps and continuously improve their programs. It was designed to complement existing technical assistance resources through tip sheets, examples, and links to multiple resources.

 

I was most interested in Part 2 (pages 17-21) because I think the advice is applicable to organizations beyond those managing Head Start programs. This section of the report covers new ways of thinking about organizational culture grounded in continuous improvement and feedback. It speaks to establishing cultural norms of curiosity, reflection, and trust among staff, and it outlines the practices and skills needed to create a learning culture. Here’s what I learned:

 

Definition: A culture of continuous improvement

The term means a learning culture. The word “continuous” means that the organization has created a virtuous cycle of feedback that repeatedly inspires staff to reflect on what is working and what can be done differently to get better results. This process of reflection is embedded in the organization’s working style, not a random moment of inspiration after a program evaluation is completed. Everyone on staff understands that questions are the best teachers and, in an effort to sustain learning, articulates questions and seeks answers to them.

 

Organizations that have this type of culture do not play the blame game if something needs to be improved. They have created a safe space for staff, program participants, and other stakeholders to give feedback, reflect, ask questions, and think creatively about solutions. Senior leaders model the skill that Edgar H. Schein calls “Humble Inquiry” – the art of asking questions based on curiosity and building trust.

 

Cultural Indicators

The report describes the indicators below as hallmarks of a culture of continuous improvement.

 

[Graphic from the report: hallmarks of a data culture of continuous improvement]

 

The report points out that Head Start programs often have to balance compliance with creating a culture of continuous improvement, which is not an easy task. Organizations need not only data collection systems, but also systems for observation, learning, reflection, and action – or, as the report describes them, “systems that help us identify and solve problems proactively instead of always reacting.” The report frames this as a shift “from fighting fires to innovation.”

 

The report also talks about an organization’s cultural readiness to switch to a culture of continuous improvement, using a blog post I wrote about being data-informed for inspiration. It maps out stages of change, but also recognizes that organizations may be in different stages at the same time:

 

  • Dormant: At this stage, the organization does not know where to start. Data collection may occur from time to time, but there is no formal reporting. There are no data systems in place, such as dashboards or simple collection methods. Staff are often overwhelmed by the thought of measurement and it falls to the bottom of the to-do list. Alternatively, there may be an emphasis on collecting more data than is necessary, but no one relates it to decision making. There is no reflection process for analyzing success or failure for future use.
  • Testing and Coordinating: At this stage, the organization is regularly collecting data, but it is stored across different spreadsheets and collected by different people or departments. Data are not linked to organizational results or mission-driven goals across programs. Discussions on how to improve results are rarely part of staff meetings.
  • Scaling and Institutionalization: At this stage, there is an organization-wide system and dashboard for collecting data that are shared with different departments. There are different views or levels of detail for senior leaders, line staff, or other stakeholders. There are periodic (e.g., weekly, biweekly, monthly, or quarterly) check-ins to evaluate what is working and what is not. The organization provides training and professional development for staff to learn how to use measurement tools.
  • Empowering: At this stage, performance indicators are used across programs throughout the organization. There is a staff position responsible for setting the overall agenda for data collection and reporting, helping staff understand data, and ensuring that systems and timelines are successful. All staff, however, are empowered and expected to check, apply, and interpret their own data. In addition to periodic check-ins, the organizational dashboard includes goal-oriented performance metrics (see the sketch after this list). The organizational dashboard is shared across departments and there is a process for analyzing, discussing, and applying results. Data visualization techniques are used not only to report the data analysis but also to reflect on best practices culled from the data.
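To make “goal-oriented performance metrics” a little more concrete, here is a minimal, hypothetical sketch in Python of how a small organization might structure dashboard indicators with targets and progress flags for periodic check-ins. The programs, indicator names, and numbers are invented for illustration and are not drawn from the report.

    # Hypothetical sketch: dashboard indicators with targets and a check-in flag.
    # All names and numbers below are invented examples, not from the report.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        program: str   # which program the metric belongs to
        name: str      # what is being measured
        target: float  # the goal the organization set
        actual: float  # the most recent measured value

        @property
        def progress(self) -> float:
            """Share of the target achieved so far (0.0 to 1.0 or more)."""
            return self.actual / self.target if self.target else 0.0

    # Example data mixing an outcome metric and an effort metric per program.
    indicators = [
        Indicator("Early Literacy", "children meeting reading benchmark", 120, 95),
        Indicator("Early Literacy", "tutoring sessions delivered", 400, 410),
        Indicator("Family Support", "families reporting improved stability", 80, 52),
    ]

    # A text-only "dashboard" view: one line per indicator, flagged for discussion.
    for ind in indicators:
        flag = "on track" if ind.progress >= 0.9 else "discuss at check-in"
        print(f"{ind.program:15s} | {ind.name:40s} | {ind.progress:5.0%} | {flag}")

The point of a sketch like this is not the tooling – a spreadsheet can do the same – but that every metric is tied to a stated goal and surfaces automatically at the regular check-ins the report describes.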

 

The report also includes a reference to this excellent tool for evaluating an organization’s capacity to carry out evaluation activities. The report identifies these criteria:

 

Core Competencies of Organizations With a Culture of Continuous Improvement

  • Our organization measures outcomes (changes in participant condition, behavior or knowledge), not just efforts (quantifiable activities or services delivered).
  • Our organization can identify which indicators are appropriate for measuring how we work.
  • Our organization has clarity about what we want to accomplish in the short term (e.g., one to five years) and what success will look like.
  • Our organization ensures that staff have the information and skills they need to successfully engage with data for program improvement (e.g., access to resources and training).
  • Our organization has staff who are experienced in data collection, data use, and different stakeholders’ information needs.
  • Our organization has staff who know how to analyze data and interpret what the data mean.
  • Our organization values learning. This is demonstrated by staff actively asking questions, gathering information, and thinking critically about how to improve their work.
  • Leaders in our organization support data use to identify areas of improvement.
  • Our organization is capable of effectively communicating about data and results (both positive and negative) within and outside the organization.
  • Our organization promotes and facilitates internal staff members’ learning and reflection in meaningful ways regarding data use, planning, implementation and discussion of findings (“learning by doing”).
  • Our organization modifies its course of action based on findings from program data.
  • Managers look at program data as an important input to help them improve staff performance and manage for results.
  • Findings from program data are integrated into decision-making when deciding which policy options and strategies to pursue.

 

For people who are in the data-for-good space, technical work and “janitorial” work are only a part of their jobs. Understanding organizational data culture and creating a culture of continuous improvement based on data is a hot topic. It’s on the agenda at the Do Good Data Conference later this month (I’m co-facilitating the closing plenary). It’s also on the agenda at the Data on Purpose Conference at Stanford in June. For some organizations, it is more zen – it’s about beginning it and continuing it, as Laura Quinn from Idealware points out in her latest Markets For Good blog post.

 

Does your nonprofit have a culture of continuous improvement based on data? What does it look like? How did it get started?


Many thanks to Beth Kanter for her latest column. Be sure to comment below and, of course, follow her on Twitter, @Kanter.

 
