An Alternative To RCTs
In this follow-up to his recent and widely debated article, 'Should Controlled Trials Be The Standard for Impact Measurement?', Peter York proposes a new 'gold standard' for impact measurement.
Last week, in my post ‘Should Controlled Trials Be The Standard for Impact Measurement?’ I examined the effectiveness of randomised controlled trials (RCTs). I presented the case for why the validity of RCTs as the gold standard and Holy Grail of impact measurement is, at the least, up for debate. To enhance the discussion, I would now like to put forth an alternative to the RCT.
It’s time for the social sector to try out the method that medicine, psychology, business, economics and ecology have long been using: the observational cohort study (OCS). Observational studies are at least as valid as controlled studies, and typically more broadly applicable. An OCS follows a group – or “cohort” – of people with defined characteristics to determine the incidence and level of an outcome resulting from different exposures to a condition, treatment, program, event and/or set of experiences. Because the type, level and rate of exposure are determined before the outcome, cohort studies are temporal in nature, providing strong scientific evidence from which one can assess causality.
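To make the cohort logic concrete, here is a minimal sketch in Python. The records and exposure groups are entirely hypothetical; the point is simply that exposure is recorded before the outcome, and that outcome incidence is then compared across exposure groups.

```python
from collections import defaultdict

# Hypothetical cohort records: each participant's exposure level is
# recorded at intake (before the outcome), and a binary outcome is
# observed later in the follow-up period.
cohort = [
    {"id": 1, "exposure": "high", "outcome": 1},
    {"id": 2, "exposure": "high", "outcome": 1},
    {"id": 3, "exposure": "high", "outcome": 0},
    {"id": 4, "exposure": "low", "outcome": 1},
    {"id": 5, "exposure": "low", "outcome": 0},
    {"id": 6, "exposure": "low", "outcome": 0},
    {"id": 7, "exposure": "low", "outcome": 0},
]

def incidence_by_exposure(records):
    """Return the outcome incidence (rate) for each exposure group."""
    counts = defaultdict(lambda: [0, 0])  # exposure -> [events, total]
    for r in records:
        counts[r["exposure"]][0] += r["outcome"]
        counts[r["exposure"]][1] += 1
    return {exp: events / total for exp, (events, total) in counts.items()}

rates = incidence_by_exposure(cohort)
print(rates)  # {'high': 0.6666666666666666, 'low': 0.25}
```

A real OCS would, of course, adjust for confounders and follow the cohort over time; this sketch only shows the core comparison that temporality makes possible.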
For over two decades I have conducted program evaluations for foundations using an OCS method to longitudinally measure and track the same outcomes across multiple grantee organizations, all working within one broad program area (e.g., out-of-school-time youth development). While the grantees built their programs from the same pool of program “ingredients,” each grantee’s specific program “recipe” differed because of differences in community context, population, and organizational resources and capacity. We measured these naturally occurring “recipe” differences in order to make comparisons between beneficiaries across the larger cohort. Through these OCS evaluations, we were able to analytically determine what worked, for whom, when, where and how. Because we built shared datasets of a large and growing cohort of beneficiaries, the analysis provided rich insights that were applicable and generalizable much more quickly, giving all leaders the ability to adapt and improve their programs. And, by the way, these evaluations were not as costly as controlled trials.
It is now possible to leverage technology to take OCS even further. The social impact sector has the kinds of technology and tools needed to rapidly gather and exponentially grow shared data across a cohort of similar program beneficiaries. We can also now bring in computer science analytic techniques our field has yet to apply – e.g., machine learning/artificial intelligence algorithms – that take us beyond descriptive answers about what just happened and provide both predictive and prescriptive insights for those on the front lines of social change. For example, at Algorhythm, we just completed a retrospective OCS project for the State of Florida’s Department of Juvenile Justice, applying machine learning to a “cohort” dataset of over 140,000 cases. Through a collaboration between data scientists and social scientists (criminologists), and the application of machine learning algorithms, we built predictive models for recidivism and, more importantly, developed prescriptive models that can be used to tailor interventions that will reduce the odds of re-arrest; we improved the accuracy of Florida’s current risk assessment from 61% to 82%. Our next step is to build dynamic risk assessment and intervention planning applications for probation officers, as well as evaluation systems for supervisors, all from the same “cohort” data. This is the potential of what I will call “technology-enhanced” OCS. Technology now allows us to develop shared OCS evaluation systems, inclusive of up-front assessment and planning applications, that should be made available, accessible and affordable to all.
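The predictive step described above can be sketched in miniature. The following is a toy illustration only, not the Florida model: it trains a simple logistic-regression classifier (pure Python, gradient descent) on synthetic “cohort” records, where intake features are recorded before the outcome and held-out accuracy is then measured. All feature names and the data-generating process are invented for the example.

```python
import random
from math import exp

random.seed(0)

def synth_record():
    """Generate one synthetic case: intake features plus a later outcome.
    The risk process below is made up purely for illustration."""
    prior_offenses = random.randint(0, 5)
    age_at_intake = random.randint(12, 18)
    risk = 0.15 * prior_offenses - 0.05 * (age_at_intake - 12)
    outcome = 1 if random.random() < 0.2 + max(risk, 0.0) else 0
    return ([prior_offenses, age_at_intake], outcome)

data = [synth_record() for _ in range(2000)]
train, test = data[:1500], data[500:]

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def fit(records, lr=0.01, epochs=200):
    """Logistic regression via stochastic gradient descent on log-loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in records:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            g = p - y  # gradient of log-loss w.r.t. the logit
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

w, b = fit(train)
correct = sum(
    (sigmoid(w[0] * x[0] + w[1] * x[1] + b) >= 0.5) == (y == 1)
    for x, y in test
)
print(f"held-out accuracy: {correct / len(test):.2f}")
```

A production system of the kind described would use far richer features, proper validation, and fairness auditing; the sketch only shows the basic shape of learning a risk model from retrospective cohort data.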
I know that some sector leaders, social scientists and evaluators will continue to argue that controlled trials are the best method. To them, and to everyone, I would say: let us at least admit the limitations of controlled trials, especially that their findings typically cannot be replicated in the real world, and place OCS at least tied for gold. If some cannot admit a tie, then the best the sector can hope for is that everyone stops placing observational studies so far behind in the race for rigor that the generalizability and real-world applicability of the observational approach are undermined. For those who weren’t aware of OCS, I hope I have provided an argument, and shared a method, for how we can advance learning here and now, while those conducting controlled trials wait a few more years, fingers crossed, for their findings to come in; and they probably won’t like the answer.
Many thanks to Peter York for this two-part series closely examining randomised controlled trials and their alternatives. If you haven’t read his first article yet, we thoroughly recommend you do so here.