EEF Blog: Randomised controlled trials – 3 good things, 3 bad things, and 5 top tips

Author: Camilla Nevill
Blog • 4 minutes

The EEF’s head of evaluation, Camilla Nevill, recently contributed to a session at the Behavioural Exchange 2019 conference entitled ‘The RCT monster: how to train your dragon’ – we invite you to watch the session in full. But if you don’t have time, and are looking for the pocket-sized version of what she had to say, read on…

Make no mistake, I am not the ‘Mother of Dragons’. But I have led the EEF’s evaluation work since it was set up in 2011, which has certainly led me into their lair.

In the past eight years, we have committed £110 million, including £30 million contributed by co-funding partners, to fund the delivery and evaluation of some 200 teaching and learning programmes aiming to improve outcomes for 3–18 year-olds from disadvantaged backgrounds.

This has included 150 randomised controlled trials (RCTs) involving more than half the schools in England – and many early years and post-16 settings, too – and reaching well over one million children and young people.

Based on this experience, below I have listed: 

  • 3 good things I have learnt about RCTs; 
  • 3 difficult things; and finally, 
  • my top 5 tips for training your ‘RCT dragon’.

3 good things:

  1. RCTs are currently the optimal and least-biased method for estimating, on average, whether something works, when done well. Those last three words are important. Just because a study is an RCT does not automatically mean it is ‘gold standard’. In fact, there is a continuum from the truly outstanding to the totally rubbish. This is one of the reasons why the EEF has set the highest standards for transparency, pre-specification and reporting, and developed our padlock rating system to help time-poor practitioners understand how much to trust a result. (A toy sketch of the underlying logic follows this list.)
  2. When combined with information on cost and implementation, RCTs provide very powerful information for decision makers (in our case, senior leaders in schools and other settings) on the ‘best bets’ for spending their budgets. However, without implementation evidence and an underlying theory, RCTs are a ‘black box’: it can be difficult to interpret a result and decide how to act on it.
  3. Schools are willing to take part in trials and to be randomised. EEF-funded projects have successfully recruited 13,000+ schools in England, as well as many early years and post-16 settings. We never would have predicted that eight years ago; in fact, we would have said it would be our toughest challenge. This success is down to our grantees (the project developers) and our panel of independent evaluators working together with the EEF to communicate the collective benefit of participating in trials, so that schools take part whether they are randomly allocated to the intervention group or the control group.
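To make the first point concrete, here is a minimal, hypothetical sketch of what an RCT buys you: because schools are allocated to intervention or control at random, a simple difference in mean outcomes is an unbiased estimate of the average effect. The school names, outcome scores and built-in effect below are invented for illustration, not EEF data.

```python
# A toy illustration (hypothetical data, not an EEF trial): randomly allocate
# schools to intervention or control, then estimate the average effect as the
# difference in mean outcomes between the two groups.
import random

random.seed(1)

schools = [f"school_{i}" for i in range(200)]

# Random allocation: half the schools get the intervention.
intervention = set(random.sample(schools, k=len(schools) // 2))

# Simulated mean pupil outcome per school (e.g. a test score), with a
# small built-in benefit of 2 points for intervention schools.
outcome = {
    s: random.gauss(100, 10) + (2 if s in intervention else 0)
    for s in schools
}

treated = [outcome[s] for s in schools if s in intervention]
control = [outcome[s] for s in schools if s not in intervention]

# Because allocation was random, everything else is balanced in expectation,
# so this simple difference in means is an unbiased estimate of the effect.
effect = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Estimated average effect: {effect:.2f} points")
```

With only 200 simulated schools, the estimate will wobble around the true value of 2 points – which is exactly the precision problem the difficult things below return to.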

3 difficult things:

  1. RCTs are not suited to answering every kind of question, and there are some things to which schools are just not willing to be randomised. Questions the EEF has attempted to answer through RCTs without success include mixed-attainment grouping versus setting in schools (too ideological), financial incentives for teachers (too controversial), and moving secondary school start times later in the day to accommodate sleepy teenagers (too impractical). For these kinds of questions, we need alternative designs.
  2. Sometimes the answer depends. RCTs tell you what works on average, but how is one school supposed to know how applicable an estimate is to them? I have learned that we need hundreds of schools even to estimate what works on average (see the back-of-envelope calculation after this list), so imagine how many we would need to tell what works for different types of schools and pupils. But data archiving and linkage have great potential for understanding variation in outcomes, which is why every single one of our RCTs is archived and linked to the National Pupil Database. Analyses of specific groups are currently under way, including pupils with special educational needs and children in care.
  3. Decision makers want answers now, but RCTs take time to plan and deliver well. Without excellent planning and communication throughout, RCTs just won’t work. There is no easy solution to this. It does mean, however, that it is essential to persuade decision-makers of the value of RCTs, and to make sure the RCTs we design now are still relevant when the results come out.
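To illustrate why trials need so many schools (point 2 above), here is a rough back-of-envelope sample-size calculation for a school-level trial, using the standard design-effect adjustment for cluster-randomised designs. Every parameter (pupils per school, intra-cluster correlation, target effect size) is an illustrative assumption, not an EEF planning figure.

```python
# A rough sample-size sketch for a school-level (cluster-randomised) trial.
# All parameters below are illustrative assumptions, not EEF figures.
from scipy.stats import norm

alpha, power = 0.05, 0.80    # 5% significance, 80% power
effect_size = 0.20           # target effect in standard deviations
pupils_per_school = 25
icc = 0.15                   # intra-cluster correlation: pupil similarity within a school

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_beta = norm.ppf(power)            # ~0.84

# Pupils needed per arm if pupils were randomised individually.
n_individual = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

# Clustering inflates the required sample by the 'design effect'.
design_effect = 1 + (pupils_per_school - 1) * icc

pupils_per_arm = n_individual * design_effect
schools_per_arm = pupils_per_arm / pupils_per_school

print(f"Pupils per arm:  {pupils_per_arm:.0f}")    # ~1,805
print(f"Schools per arm: {schools_per_arm:.0f}")   # ~72, so ~145 schools in total
```

Detecting the same effect within a subgroup, or comparing different types of schools, multiplies these numbers again – which is why hundreds of schools are needed before the ‘what works for whom’ questions can even be asked.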

So, how can you train your ‘RCT dragon’?

I would suggest you:

  1. communicate the benefits well to your participants, so they don’t drop out;
  2. collect high-quality data on cost and implementation;
  3. register your trial, pre-specify your design and analysis, and report using CONSORT standards;
  4. make sure you set up your RCT to enable archiving and tracking of long-term outcomes (particularly with respect to GDPR); and,
  5. think about context and timing carefully, to ensure relevance at the end.

And remember, sometimes RCTs are not the most appropriate monster to address your research question.

At the EEF we don’t only use RCTs; we draw on many alternative designs as well. For example, we have used mixed-methods pilots during the early stages of programme development, and we are currently using matched comparisons to evaluate things like mixed-attainment grouping of pupils through our School Choices funding stream.

We are also adapting RCT designs to be more suited to testing teacher practices (as opposed to programmes) through our Teacher Choices pilot.

So, RCT dragons need friends too. Follow these suggestions and hopefully yours will give you something truly ‘gold standard’.