EEF Blog: Recruiting to trials – how hard can it be?

Anneka Dawson and Triin Edovald from the EEF’s evaluation team explore some of the issues associated with recruiting to challenging trials.

Six years ago, randomised controlled trials (RCTs) were almost unheard of in education in this country. Since 2011, the EEF has committed funding for rigorous and independent evaluations of over 150 different programmes. As a result, we have commissioned more RCTs in education than any other organisation globally, with our trials now accounting for more than 10 per cent of all known trials in education.

Since the very first trials were commissioned, we’ve learnt A LOT about how best to test and evaluate different teaching and learning strategies. The education sector now has a much better understanding of how to run trials than it did back then.

Part of this means being transparent about what hasn’t worked and having open conversations about parts of the evaluation process that have proved problematic. We’ve noticed an increasing appetite in the research and education communities to do this. Dean Karlan and Jacob Appel have made a significant contribution with their ‘how-not-to’ handbook of running RCTs. They group the leading causes of research failures into five categories:

  • inappropriate research settings;
  • technical design flaws;
  • partner organisation challenges;
  • survey and measurement execution problems; and
  • low participation rates.

Today we’ve published a report of a project we have been unable to take to the implementation stage. For a number of reasons, we struggled to recruit enough participants to take part. Taking a project to trial without enough participants would mean that the evaluation would struggle to draw any useful or robust conclusions.
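To see why an under-recruited trial struggles to produce robust conclusions, it can help to look at a rough two-arm sample-size calculation. This is a minimal sketch only, assuming a conventional 5% two-sided significance level, 80% power, and a small standardised effect size of 0.2; these figures are illustrative and are not the actual design parameters of the trial discussed here:

```python
import math

def sample_size_per_arm(effect_size, z_alpha=1.96, z_beta=0.8416):
    """Approximate participants needed per arm for a two-arm trial
    comparing means, at 5% two-sided significance (z_alpha = 1.96)
    and 80% power (z_beta = 0.8416), for a given standardised
    effect size (Cohen's d)."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A small effect (d = 0.2), of the size often seen in education
# interventions, needs roughly 393 pupils in each arm.
print(sample_size_per_arm(0.2))

# Under-recruitment bites hard: with only ~200 pupils per arm, the
# same design can reliably detect effects of roughly d = 0.28 or
# larger, so a genuinely useful but small effect would likely be missed.
```

The point of the sketch is that recruiting half the intended sample does not simply halve the precision of the answer: it can leave a trial able to detect only effects larger than the intervention could plausibly produce.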

We want to share the lessons we’ve learned from this project with the wider research and education communities. We hope the findings will support our evaluation and delivery partners as they embark on their research projects.

Boarding chances for children was designed to look at the impact of boarding in state and independent boarding schools for children identified as ‘in need’, defined as those who require Local Authority support of some kind. We wanted to find out whether boarding school could improve both their attainment and their social-emotional skills. Those identified as ‘children in need’ have even poorer educational outcomes than looked-after children, and one explanation for this is the lack of stability that they experience. There has been promising evidence on the impact of boarding from the US and from Buttle UK’s previous work.

However, receiving enough referrals from local authorities was incredibly challenging and we didn't recruit enough young people to make the trial statistically secure.

Many local authorities were reluctant to refer children to be randomly allocated to receive boarding or not. It is important to stress that pupils were not to be informed of the possible offer of boarding until they had been allocated to the intervention group, to avoid young people being unnecessarily disappointed.

We adapted the methodology of the trial to take out the randomisation element and intended to test the project’s feasibility through a smaller pilot trial. However, the challenges associated with referring children to take part were so great that we were unable to take the project to trial.

Many local authorities were concerned that they didn’t have the staff or financial resources to take part. Others wanted to focus on their ‘in-house’ provision and on expanding the initiatives they already run.

While such caution is understandable, it remains the case that outcomes for this group of young people are bleak. The EEF continues to believe that we have to be prepared to test innovative approaches and programmes if we want to find out how to break this cycle of unfulfilled potential.

Lessons learned

What have we learnt from this project, particularly about recruiting to challenging trials?

Here are five lessons that will be useful to us, and to other researchers and delivery organisations alike:

  • Evaluation teams need to have a thorough understanding of the context of an intervention and develop a plausible theory of change. Asking if the experiment is logistically feasible here and now is key.
  • Recruiting participants to trials needs to be a collaborative effort, with delivery teams working closely with evaluation teams and key stakeholders.
  • Recruitment takes longer, and needs a larger pool of potential participants, than we originally thought. Anything that increases the number of people eligible to take part in a trial will help.
  • It is crucial to allow a longer recruitment period for challenging projects that need significant buy-in from a range of stakeholders. Buy-in from one group is not enough. For example, having social workers on board without directors of children’s services, or vice versa, will not work for any project.
  • Speak to your stakeholders. Get their perspective on how feasible trialling a specific project is through case studies, focus groups and surveys. For school stakeholders you could use methods such as NFER’s Teacher Voice, or the new Teacher Tapp app. Listen and act upon what they say. 

A second project - Motivating teachers with incentivised pay and coaching - also experienced recruitment challenges and did not go ahead. You can read the 'lessons learned' evaluation report for this here.

We hope these 'lessons learned' reports are useful not only to the education sector, but also to those running RCTs across other disciplines, particularly those setting up and planning evaluation projects for the newly established What Works Centre for social care.