Dr. Denise Raquel Dunning, Program Director of the Adolescent Girls’ Advocacy & Leadership Initiative (AGALI) at the Public Health Institute, advocates for a new evaluation model to maximize the impact of social sector initiatives.
It is clear that evaluation is a valuable tool to assess the impact of social sector initiatives. Less apparent but equally important is the corollary – evaluation directly influences social sector impact. And not just in the ways that we often think – where strong evaluations result in replication, expansion, and increased funding for programs that demonstrate results, and weak evaluations lead us to adapt our models or try new ones.
Beyond the obvious ways in which evaluation findings shape the social sector, the very process of evaluation also has the potential to strengthen and sharpen our impact, if we use it wisely. Contrary to widely held views, evaluation is not a neutral force that objectively assesses impact without influencing the initiative in question. Quite the opposite: evaluation is one of the social sector’s most important forms of intervention. Through interviews, surveys, and measurement of social sector initiatives, we are changing the very people, communities, and landscape we seek to ‘objectively’ evaluate.
Take, for example, the external evaluation commissioned by the UN Foundation and the Public Health Institute to examine the results and lessons learned from the Adolescent Girls’ Advocacy & Leadership Initiative (AGALI). Since 2009, AGALI has improved adolescent girls’ rights, health, education, and livelihoods in Africa and Latin America by empowering leaders and organizations to advocate for girl-friendly laws, policies, programs, and funding. The impact evaluation examines AGALI’s national advocacy and policy results, explores the differences AGALI has made in the lives of adolescent girls, and analyzes how the AGALI model has built the advocacy capacity of leaders, organizations, and networks.
While the principal focus of the AGALI evaluation is to examine the program’s impacts, we are realizing that the actual process of evaluation is reinforcing and strengthening the very results we set out to measure. In analyzing AGALI’s national policy outcomes, the evaluation process is amplifying the program’s original advocacy results.
For example, the evaluation is using contribution analysis to explore how Miguel Angel Lopez, a 2009 AGALI Fellow from Guatemala, advocated successfully for passage of a national policy to ensure specialized care and treatment for adolescent girl survivors of sexual violence. Contribution analysis uses a six-step process to assess cause and effect by developing and testing a theory of change. So in addition to interviewing Lopez and other national leaders who developed and advocated for the policy, the evaluation team will interview the political decision-makers who adopted the policy. During these interviews, the evaluation team will ask about the advocacy process, the need for the sexual violence policy, and its impact on the lives and health of adolescent girls.
Research demonstrates that participating in an evaluation process often leads interview subjects – in this case, the political decision-makers who passed Guatemala’s national sexual violence policy – to reflect on and reaffirm their initial commitment to the initiative in question. Therefore, this evaluation is not only a mechanism to assess the impacts of AGALI’s advocacy, but also an opportunity to deepen policymakers’ commitment to adolescent girls’ needs, thereby helping to ensure the successful implementation, funding, and monitoring of the new sexual violence policy. For more details about AGALI’s advocacy in Guatemala, see the full case study.
In Liberia, the evaluation uses the Most Significant Change methodology, an inductive approach to analyze AGALI’s direct impacts on adolescent girls and identify unexpected changes resulting from the program. The evaluation in Liberia focuses on AGALI Fellows’ successful advocacy for passage of a national Children’s Law that guarantees comprehensive protection for the rights, health, and education of adolescent girls, including outlawing child marriage and female genital mutilation. In addition to interviewing AGALI Fellows and political decision-makers, the evaluation team will interview adolescent girl leaders of the National Children’s Parliament who played a key role in the Children’s Law advocacy campaign.
As part of the process, the evaluation team will ask the young Parliamentarians to reflect on their experiences advocating for the needs of adolescent girls and becoming leaders in Liberia’s Children’s Law movement. Through that process of introspection, the young women interviewed will reflect on how their own burgeoning leadership was integral to the success of the national advocacy campaign. Yet again, evaluation serves not merely as a strategy to assess impact, but also provides an opportunity for learning, reflection, and growth for both the program and participants. For more details about AGALI’s advocacy in Liberia, see the three-minute video or the full case study.
Evaluation is an iterative feedback loop that can continually reinforce and deepen advancements in the social sector. Evaluation has the potential to enrich and reinforce our programmatic goals – but done badly, it may also erode important achievements and progress underway. To that end, we must structure evaluations not only to accurately capture impacts, but also to ensure that the process itself strengthens and builds upon our existing outcomes.
Social sector investment in evaluation is crucial to ensuring that we maximize our impact – and yet, in a zero-sum funding environment, we often face what appears to be a difficult trade-off: whether to invest our limited dollars in interventions or in evaluations. But only by recognizing this false dichotomy and understanding that evaluation is intervention will we develop initiatives that are as targeted, effective, and sustainable as they can be. Donors also play a key role in this conundrum – by incentivizing risk-taking, learning, and innovation through long-term investments that explicitly fund evaluation, foundations have the potential to transform the social sector’s resistance to evaluation.