EEF Blog: Scaling education interventions – what are the challenges?

Today the EEF is publishing the independent evaluation of our large-scale trial of ‘IPEELL: using self-regulation to improve writing’. This tested a scalable version of the programme under ‘real world’ conditions in a large number of schools. Here, Head of Programme Strategy Emily Yeomans explores why scaling education programmes is challenging and what needs to be considered.

IPEELL, a programme which aims to improve pupils’ writing, was adapted from the well-evidenced US programme ‘Self-Regulated Strategy Development’ (SRSD). IPEELL (which stands for Introduction, Point, Explain, Ending, Links and Language) combines the writing approach from the SRSD programme – which provides a clear structure to assist writers, as well as self-monitoring and goal setting – with memorable experiences that pupils can use as a stimulus for their writing.

The first (efficacy) EEF trial of IPEELL

In 2012, the EEF funded what’s termed an ‘efficacy trial’ of IPEELL. This tested the programme under best possible conditions. The programme developer, Calderdale Excellence Partnership, delivered the training in 26 local schools, with IPEELL delivered to a target population of struggling writers who, it was thought, were particularly likely to benefit from it.

This first trial found strikingly positive results, with pupils making around 9 months’ additional progress in writing compared with pupils who did not receive IPEELL. While the EEF security rating for the trial – 2 padlocks out of 5, meaning we had some, but limited, confidence in the results – necessarily introduced caution, the potential benefits were too significant to ignore.

The promise of this small-scale study immediately brought us to one of the key challenges the EEF faces: how can we best support scaling evidence for impact? How can we take a project that looks like it works in 26 schools and make it work in 260 or 2,600? And – crucially – if we do that, will it still work? Or will making it bigger dilute the impact?

The new (effectiveness) EEF trial of IPEELL

In 2015, the EEF decided to fund a new trial of IPEELL. This would be what’s termed an ‘effectiveness trial’, testing the programme under everyday conditions in a larger number of schools (167).

It’s important to note, by the way, this wasn’t a replication of the first (efficacy) trial. Replication would mean re-testing the same non-scalable model of IPEELL under the same conditions in order to check it could reproduce similar results to the first trial. While there is a good argument for replication studies in education, the EEF’s charitable mission to close the attainment gap impels us to help develop and trial genuinely scalable programmes.

There were, therefore, a number of changes made to the IPEELL model tested in this new (effectiveness) trial in order to make it scalable:

  • First, there needed to be a scalable model for training schools in delivering IPEELL, one which wouldn’t rely solely on its developer, Calderdale Excellence Partnership (CEP). To achieve this, CEP worked to develop a ‘train the trainer’ approach, forging partnerships with Leeds local authority and the Centre for British Teachers (CfBT), to enable others beyond CEP to train teachers and ensure a greater geographic reach. CEP trained trainers, and these trainers then worked with schools across Leeds and Lincolnshire to implement the programme.
  • Secondly, IPEELL was delivered to all pupils in a class, rather than only those identified as struggling with writing. This change to the programme mirrored previous trials of the ‘Self-Regulated Strategy Development’ approach, and made delivering IPEELL in primary schools more straightforward, with pupils able to apply it to all of their writing tasks.
  • Thirdly, the previous model of IPEELL was delivered across both primary and secondary schools, as it was funded as part of the EEF’s funding round dedicated to literacy catch-up for 11-year-olds at the transition. The IPEELL model evaluated this time focused solely on primary schools (Years 5 and 6) to remove the complexity of delivering part of the intervention in primaries and part in secondaries.

What we learned from this new (effectiveness) trial of IPEELL

The independent evaluation of this trial of a scalable model of IPEELL delivered mixed results in writing outcomes. We tested the impact on pupils of receiving one year of IPEELL and of receiving two years:

  • Pupils receiving IPEELL for two years did make a small amount of additional progress in writing, equivalent to +2 months, compared to pupils who did not.
  • But pupils receiving IPEELL for only one year appeared to make less progress than pupils in the comparison group.
  • In addition, receiving two years of IPEELL appears to have had a negative impact on pupils’ maths and reading outcomes, possibly due to curriculum time being diverted away from these subjects towards writing.
  • Interestingly, if we only consider the impact on pupils with low prior attainment – the group who received IPEELL in the first (efficacy) trial – then using IPEELL for two years does show positive results, although the size of the impact is smaller for the scalable model, at around +3 months’ additional progress.

There are lots of possible explanations for differences in the pupil outcomes from these two trials of IPEELL (some of these are explored in my previous blog, ‘Testing, testing, testing! How do we respond when trials produce different results?’).

The key reasons in this case are most likely:

(1) the adoption of a scalable model, meaning IPEELL’s developers were no longer as closely involved in training the teachers; and

(2) implementing IPEELL as a whole-class programme, rather than an intervention targeted at struggling writers.

One key lesson: the importance of implementation

Scaling programmes is difficult. It requires going to a more diverse population of schools and probably requires new trainers to be recruited. Both of these factors may naturally lead to a greater variation in how the programme is implemented and overall lower fidelity to the programme, which in turn may lead to differences in pupil outcomes. 

For schools considering adopting a programme that has been trialled by the EEF, a solid plan for implementation should therefore be at the heart of that decision. To help schools put evidence to work effectively, we have published a guidance report, A School’s Guide to Implementation, with clear and actionable recommendations.

Where next with IPEELL?

The findings from this new (effectiveness) trial of IPEELL are challenging. Primary schools wanting to implement IPEELL should consider carefully how closely to match the conditions of the original project tested with positive outcomes in the first (efficacy) trial. This may well include reverting to IPEELL as an intervention targeted at struggling writers, rather than a whole-class programme. Either way, schools should take measures to ensure that an increased classroom focus on writing does not negatively affect reading and maths teaching.

The EEF remains interested in the IPEELL model, both because of the results from the first (efficacy) trial – partially backed up by the positive impact on pupils with low prior attainment in the second (effectiveness) trial – and because of the extensive existing evidence underpinning the ‘Self-Regulated Strategy Development’ (SRSD) approach from which IPEELL is adapted.

One possible next step would be for the EEF to test another, still scalable, model of IPEELL that is targeted to struggling writers and takes care to avoid the negative ‘spillover’ effects in reading and maths we saw in this latest trial. The challenge would then be to develop this scalable model in such a way that implementation quality is high, even when delivered to a large number of schools.