EEF Blog: Why does the EEF fund pilot projects?

Today we published three new independent evaluation reports of EEF-funded pilot projects. Here, Jonathan Kay and Stephen Tall explore why we fund pilots and what we can learn from today's findings.

The vast bulk of the EEF’s grant-funding is in commissioning the delivery and evaluation of projects as randomised controlled trials (RCTs). We fund two types, both of which aim to give us a robust estimate of the impact of a programme by comparing the outcomes of students who received it with a control group of students who didn’t:

  • efficacy trials test whether a programme can work under ideal or developer-led conditions in a large number of schools, nurseries or colleges; and
  • effectiveness trials test whether a programme can work when using a delivery model that can be scaled across the country under ‘real world’ conditions.

A necessary condition for the EEF funding such trials is that the programmes to be tested are well-defined and have promising prior evidence. There is good reason for this focus: the large-scale RCTs which are our bread-and-butter – 114 to date – are intensive and expensive. It would be a poor use of everyone’s time and resources if we funded projects before they were ready for the rigours of a trial.

However, there is another category of trial – pilot studies – which we do occasionally fund. These pilots are much smaller in scale than our RCTs and focus on particular areas where prior evidence is limited: for instance, the application of neuroscience or digital technology in the classroom, or teacher CPD, or research use in schools. In these areas, we have been willing to fund pilots – 20 to date – which develop approaches and test their feasibility before we commit to funding an RCT.

The evaluation reports that we are publishing today – Spaced Learning, Evidence for the Frontline, and Video Observation and Coaching – are all pilots which help build the evidence base in these areas.

Unlike our efficacy and effectiveness trials, pilots do not aim to provide a robust estimate of impact. Instead, independent evaluation teams develop a theory of change for the intervention before the project begins and then conduct a process evaluation to understand if the expected changes in pupil and teacher behaviour actually occurred. This can include collecting feedback from participants about perceived impacts and monitoring how the programme was implemented by teachers.

Rather than telling us whether an approach worked, pilots help grantees to develop their programmes. They give us valuable feedback on how an intervention might be refined ahead of an RCT, provide confidence that the approach can be delivered by teachers, and allow us to assess whether there is sufficient evidence of promise to suggest it may impact on attainment.

The EEF is committed to publishing the results from all of our evaluations, but it isn’t easy to communicate meaningful messages to teachers about the findings from pilots given that these trials are too small-scale to properly measure the impact on student outcomes.

So what can we learn from these latest three pilot projects?

Getting an innovative school-led project ready for trial

One of the key benefits of pilots is that they allow us to fund early-stage ideas that the wider evidence suggests might be effective, in order to test whether they are ready for an RCT.

The Spaced Learning pilot was led by the Hallam Teaching School Alliance in Sheffield. We are always keen to support school-led projects – teachers are constantly innovating, and we want to help them to test and then mobilise their most successful ideas. But we know they often need support to expand their ideas into other schools and areas while maintaining focus on their own students.

This project took a principle supported by evidence from two scientific fields, neuroscience and cognitive psychology, and applied it in a programme to prepare Years 9 and 10 students for GCSE examinations. Intensive lessons focused on chemistry, physics and biology curriculum content were interleaved with unrelated physical activity and repeated. The pilot allowed Hallam to test out (albeit at small scale) different versions of the programme to gain insight into which might be most promising.

The good news is that this pilot has been independently evaluated as ready for testing in a rigorous trial and we’re in discussions with the Hallam team about doing just that – watch this space.

Testing new approaches in key areas

Another benefit of conducting pilots is that they allow us to build up our understanding of ‘what works’ (more accurately, what might work) in areas where relatively little is known.

The Evidence for the Frontline pilot was developed by Sandringham School in Hertfordshire and the Institute for Effective Education at the University of York, with support from the Coalition for Evidence Based Education. It sought to do something quite new – to give teachers in over 30 schools ready access to expert advice from academics, enabling them to ask any question they wanted and get answers rooted in the best available evidence.

This form of ‘brokering’ of research to support teachers in improving their practice is of real interest to the EEF, and we are funding a number of trials (including some RCTs) to increase our understanding of how this can best be achieved.

In this particular case, the independent evaluator reports some positive results, but concludes that an RCT is not likely to be the best way to assess its impact. This is because it would be too difficult to measure accurately the impact on student outcomes of a service that allows any teacher across both primary and secondary schools to ask academics questions about any subject or teaching method.

Nonetheless, the core principle is one we think still has real merit, and we will be actively looking at ways to develop this approach (and to evaluate it) as part of our work to put evidence to use in schools.

Developing and improving existing programmes

A final and important role of pilots is the way that they help to refine and improve existing programmes by testing through an independent evaluation how they can be used to best effect by teachers.

The Video Observation and Coaching pilot focused on IRIS Connect, a video technology system found in many schools but often used in an unstructured way. Over the course of the EEF-funded pilot, the teams at Whole Education and IRIS were able to develop a programme of film clubs which support teachers in using the technology to improve their use of dialogue and feedback.

As the EEF Toolkit highlights, the crucial point about the impact of digital technology on teaching and learning is how it is used, not simply its presence. In this case, the pilot supported the development of a CPD model that initially – according to the evaluation report – seems feasible and popular with teachers, and which we will consider testing in the future.

So there you have it – three pilots, all with interesting findings, and from which we have learned much about their potential.