How do you evaluate the effectiveness of advocacy and policy work? How do you go beyond counting the number of grants provided or the number of beneficiaries reached? What are meaningful measures for social change and movement building? How do you include the voices of the people you seek to benefit? These were some of the questions the Disability Rights Fund (DRF) grappled with upon initiating a yearlong process to develop a monitoring and evaluation (M&E) system in 2010. This blog post offers an overview of M&E work from the perspective of a grantmaker focused on human rights and outlines learnings from a recent evaluation.
Yumi Sera, Operations Director for the Disability Rights Fund, shares this blog post with Mariane Arsenault, an evaluation consultant for the Universalia Management Group, a Canadian consulting firm specializing in monitoring and evaluation, organizational assessment, and strategic management for bilateral and multilateral agencies, private sector companies, and NGOs. In addition to heading the recent evaluation for the Disability Rights Fund, Mariane has worked on several other evaluation assignments focused on international development interventions.
…
From the outset, the most important aim was that the M&E system be an extension of our commitment to a rights-based approach and aligned with the principles of the UN Convention on the Rights of Persons with Disabilities (CRPD).
DRF believes that enhancing the participation of persons with disabilities in the realization of their rights is essential to reducing poverty amongst persons with disabilities.
We understand that across-the-board improvements in quality of life for persons with disabilities in the developing world remain far in the future. DRF’s role is to make grants that change the national and local frameworks and attitudes affecting the most marginalized groups in the disability community.
Despite the challenges of M&E in the advocacy realm, we knew we needed a system to collect reliable and valid data about the effects of our grants, to track our collective achievements, and to learn from the obstacles we face.
The M&E system we created includes tools and, most critically, a logframe with measurable outcomes and outputs and SMART (specific, measurable, achievable, relevant, and time-bound) indicators. Clear year-to-year milestones guide the work of our Program Officers, who provide oversight to grantees through annual Grantee Convenings and frequent contact throughout the life of each project. Country strategy development and annual assessments provide an in-depth analysis of trends, obstacles, and opportunities for movement building in advancing the rights outlined in the CRPD. Periodic evaluations allow us to reflect on achievements and gaps and to change course as needed.
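To make the logframe idea concrete, here is a minimal, purely illustrative sketch (in Python) of how a single logframe entry with a SMART indicator and year-to-year milestones might be represented as structured data. The outcome, indicator name, and figures are invented for illustration and are not drawn from DRF's actual logframe.

    # Hypothetical sketch of one logframe entry; field names and values
    # are invented for illustration and are not DRF's actual logframe.
    from dataclasses import dataclass, field

    @dataclass
    class LogframeEntry:
        outcome: str    # the change the grants aim to produce
        indicator: str  # a SMART indicator tied to that outcome
        baseline: int   # measured value at the start of the period
        milestones: dict = field(default_factory=dict)  # year -> target value

    entry = LogframeEntry(
        outcome="Grantee coalitions engage in national CRPD monitoring",
        indicator="Grantee-led submissions to CRPD review processes",
        baseline=0,
        milestones={2011: 2, 2012: 5},
    )

    def on_track(e: LogframeEntry, year: int, actual: int) -> bool:
        """Check a reported value against the milestone for a given year."""
        return actual >= e.milestones.get(year, 0)

    print(on_track(entry, 2012, 6))  # True: reported value meets the 2012 target

Structuring entries this way keeps the year-to-year milestone check a simple comparison between a reported value and a target.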
In late 2012, we tested this system through an independent evaluation conducted by Universalia. This evaluation was one of the first to be commissioned by a grantmaker focusing on supporting disability rights advocacy in the developing world.
A number of challenges were to be expected given the nature of DRF’s work: a grantmaker making modest advocacy grants to disabled persons’ organizations across numerous developing countries. Data on disability in the countries where we work is scarce, and strong evaluations rely on the availability of quality data. Frameworks for measuring advocacy achievements are new and as yet untested. With DRF’s M&E system and logframe, however, the evaluation at least had a solid baseline and articulated objectives against which to measure results.
Traditional evaluation concepts, such as relevance, efficiency, and effectiveness, were applied to hard-to-grasp concepts such as advocacy efforts and movement building. Applying these criteria to DRF’s participatory approach was not impossible, but it required adaptation, reflection, and multiple approaches to data collection.
Along with a review of documents, the evaluators conducted focus group discussions with grantees in Uganda and Bangladesh and stakeholder interviews; approximately 90 individuals were consulted in all. Frameworks based on the existing literature were also developed to assess the human rights-based approach as well as advocacy and movement-building efforts.
The evaluators went into the evaluation expecting to find very small-scale results, given that many DRF grantees are small, marginalized grassroots organizations. Instead, they were surprised by the magnitude of the results achieved.
We have treated the evaluation as an important learning experience, both for the Fund and for the evaluators, and are using the findings for learning conversations among key stakeholders. Here, we share ten learnings from the experience:
Stick to your principles and values
As a mission-driven organization, we strive to align everything we do with the principles of the Convention on the Rights of Persons with Disabilities. An evaluation should likewise be aligned with your values.
Seek to empower the people you are working with
We believe that meaningful change will happen only when persons with disabilities are empowered to make the decisions that affect their own lives. It is therefore crucial to involve and listen to the people your intervention seeks to benefit.
Commit your organization to robust monitoring and evaluation
Our governing body, global advisors, and staff were involved in and committed to measuring the results of our work and learning from the process. Creating an organizational culture of learning can help you and your evaluators tread into new territory.
Be accountable to your donors and to the public
The evaluation helped us to report our progress to our donors and to the public. To be accountable for public funds, you should have an independent entity validate assumptions and interventions and point out new areas to explore.
Engage stakeholders — learn from them and educate them
We network with stakeholders in global fora and meetings. Collaborating with others creates a rich environment for the movement building that is essential to social change work. Sharing lessons helps build a larger and stronger movement.
Adapt and refine monitoring and evaluation tools
To some, traditional data collection and measurement may seem too linear or too quantitative for the complexity of the social sector, but by articulating a theory of change and a logframe, we have strengthened our tactics. Combining quantitative measurements with stories can be a powerful way to portray your mission.
Be rigorous in your analysis
Collecting data about each grant can feel like micromanagement, but aggregating and analyzing that data shows where we are (or are not) having an impact; a minimal sketch of such aggregation follows below. Developing a shared meaning of concepts and terms across your organization can help you standardize your monitoring and reporting, especially across diverse populations and geographic areas.
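As noted above, here is a minimal sketch of what rolling up grant-level monitoring data can look like. The records, field names, and figures are hypothetical and are not DRF data.

    # Hypothetical grant-level records, as if coded from grantee reports.
    from collections import defaultdict

    reports = [
        {"country": "Uganda", "year": 2012, "indicator": "policy_submissions", "value": 3},
        {"country": "Uganda", "year": 2012, "indicator": "policy_submissions", "value": 1},
        {"country": "Bangladesh", "year": 2012, "indicator": "policy_submissions", "value": 2},
    ]

    # Aggregate by country, year, and indicator so grant-by-grant data
    # can reveal portfolio-level trends.
    totals = defaultdict(int)
    for r in reports:
        totals[(r["country"], r["year"], r["indicator"])] += r["value"]

    for key, total in sorted(totals.items()):
        print(key, total)

Even this simple roll-up illustrates the point of the learning: individual grant reports become meaningful evidence of impact only once they are aggregated and compared across countries and years.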
Review and evaluate processes, in addition to results
One of the biggest learnings from our evaluation was that the process, especially in advocacy work, is as important as the intended results. How you succeed or fail can teach you more than whether you attained a goal. A theory of change helps articulate these processes and make them visible.
Apply the learning to your work
Whether or not we agree with the findings, we know that the evaluation has contributed to a healthy dialogue in our organization about our work. You can use the opportunity of an evaluation to test and question your assumptions and become a better organization.
Contribute to the field of social change
Sharing what we are doing with a wider audience can raise awareness of the rights of persons with disabilities. We also hope that our approach to monitoring and evaluation will contribute to the exchange of ideas about evaluation in the social sector and among grantmakers.
See http://www.disabilityrightsfund.org/evaluation for our evaluation, and http://www.disabilityrightsfund.org/oneinseven for “One in Seven,” our recent report on our first years.