
Two Views on the Research-Practice Gap

MFG Archive, Opinion

Emily Lin explains why bridging the research-practice gap must be a multi-directional effort.


In human services, the idea of bridging research and practice is, if I may quote a classic piece of art house cinema, “so hot right now.” With advocates ranging from venerable, traditionally academic corners like the Center for the Study of Social Policy to the more pop-culturally-named Moneyball for Government, efforts to bring data and research evidence to bear on complex social issues are on the rise. They’re becoming so popular, in fact, that the Gates Foundation has funded an effort by Idealware just to document all of these “results data” efforts!


At nFocus Solutions, we recognize that we have a unique role to play in this movement and hope to champion the belief that all social sector stakeholders – including vendors, researchers, practitioners, funders, policymakers, and constituents – have valid, and sometimes competing, perspectives to contribute. In this post, I give two examples of what the research-practice gap looks like from these different perspectives to highlight why bridging the gap has to be a multi-directional effort. 


Minding the Gap: Two Examples

Consider the following excerpt from the What Works Clearinghouse, describing Accelerated Reader, a program with “potentially positive effects on general reading achievement”:


“A primary best practice recommendation for use of Accelerated Reader is a dedicated 30–60 minute block of time for reading practice. Depending on the ages and skill levels of the students, three activities may occur during a reading block: reading texts to a child, reading texts to a child using a paired-reading technique, or independent reading by the child.”


This recommendation emerges from a review of two studies of Accelerated Reader (Bullock, 2005; Ross, Nunnery, & Goldfeder, 2004) that meet the What Works evidence standards – along with tens of others that don’t – as determined by a review process that is transparently detailed in a 91-page document on the What Works Clearinghouse website.


Now consider this quote from a service provider I interviewed, who works in a collective impact initiative in the southeastern U.S.:


“I’m supposed to be tutoring these kids in reading, but it’s really hard to learn when you’re hungry. So I usually spend the first 20 minutes of every session feeding them.”


This service provider likely has nothing but respect for the many researchers and funders whose efforts are reflected in the documents on the What Works website, and would probably welcome the time to read through and understand the standards in the What Works handbook. But when she is given 45 minutes per week, told that this should be enough to “get results” for her kids, and ends up spending the bulk of that time helping to meet her students’ basic needs, implementing evidence-based practice seems nearly impossible.

As a second example, consider this list of benchmarks for effective practice from the Center for Evidence-Based Mentoring:


  1. Program contacts the mentor and mentee at a minimum frequency of twice per month for the first month of the match and monthly thereafter.
  2. Program documents information about each mentor-mentee contact, including, at minimum, date, length and nature of contact.
  3. Program provides mentors with access to at least two types of resources (e.g., expert advice from program staff or others; publications; Web-based resources; experienced mentors; available social service referrals) to help mentors negotiate challenges in the mentoring relationships as they arise.
  4. Program follows evidence-based protocol to elicit more in-depth assessment from the mentor and mentee about the relationship and uses scientifically-tested relationship assessment tools.
  5. Program provides one or more opportunities per year for post-match mentor training.


The Center for Evidence-Based Mentoring, led by Jean Rhodes at the University of Massachusetts-Boston, is a rare leader in both producing rigorous social science and translating it into user-friendly documents and checklists like the one above. Its ability to cross boundaries has resulted in national attention and funding, such as a recent $2.5M grant from the U.S. Department of Justice’s Office of Juvenile Justice and Delinquency Prevention, as well as fruitful partnerships with leading practice organizations like MENTOR.


But how does that work apply to the words of this afterschool club teacher?


“She finally started trusting me, but then the program was over. I tried to re-connect with her through her social worker, but she had been sent to live with her grandparents, who moved, and her phone number stopped working. The next time I heard of her she had been arrested.”


This teacher would likely appreciate the work of the Center for Evidence-Based Mentoring, but the applicability of the checklist above to her efforts to connect with this particular young person breaks down at the very first benchmark. The teacher had a relationship with a young person who was real and whose life meant more to her than just “tracking outcomes,” but when she looked for help with the challenges she was facing in trying to develop a deeper mentoring relationship with the girl, the “evidence” had little to offer.


If it takes two to tango, let’s have a dance party.

These examples illustrate just a small piece of the complexity of trying to bridge the research-practice gap from any single perspective. The push to make social policy and human services practice more evidence-based is laudable and probably our best bet for making real progress on tough, complex social issues, but it is difficult for everyone involved.


Researchers who care about practice constantly grapple with the challenges of designing and executing research that generates reliable, scientifically valid findings that hold across multiple, varying contexts – challenges that include a lack of support from academic institutions (which are often the researchers’ employers) and the absence of a robust funding infrastructure for work at this intersection.


At the same time, though, the everyday work of our teachers, coaches, tutors, case-workers, and front-line social service providers who work with America’s most economically disadvantaged young people is highly contextual, driven by the idiosyncrasies of the people and situations that present themselves each day. And too often, those immediate contexts seem far enough away from the world of research that practitioners decide that their already-stretched-thin time is better spent jury-rigging their own local solutions than reviewing research literature to glean applicable lessons. 


Here at nFocus, as we put the final touches on Trax8, the next version of our community data management tool, we are driven by the vision of building products that help align “best” and “real” practices. We know that the problems – and therefore solutions – in bridging research and practice belong to all of us, whether vendors, researchers, evaluators, funders, or practitioners (the role of constituents and beneficiaries being worthy of an entire series of blog posts in itself). We’d love to create space for all of these voices to interact with one another. In future posts on the nFocus blog, I will highlight a few promising collaborations between researchers and practitioners from around the country as a way of starting to chart the road forward.


This post was originally featured on nFocus’ blog. For further reading, we encourage you to read their last post on Markets For Good, entitled ‘Vendors For Good’.

