Robbie Coleman and Peter Henderson discuss the latest batch of EEF evaluation reports:
Today the EEF published eight new evaluation reports. Among the group were trials of Philosophy for Children, a thinking skills programme that improved outcomes for Year 5, and the Graduate Coaching Programme, one of the most effective literacy catch-up approaches evaluated by the EEF to date.
But as with every batch, not every approach worked. In this blog, we’re going to focus on two of our most challenging findings to date: two separate peer tutoring programmes that did not have a positive impact on academic attainment.
The first peer tutoring project that reported today evaluated an approach called Shared Maths. The programme paired pupils in Year 5 with pupils in Year 3. Each week pupils spent 20 minutes solving a mathematics problem together, with support from their class teachers. The programme was led by the same team that ran a successful trial of peer tutoring in Clackmannanshire, Scotland, published in 2011. However, in our trial, conducted in 82 schools across England, pupils participating in the project made no more progress than similar pupils not following the approach.
The second project, Peer Tutoring in Secondary Schools, was trialled in 10 schools in North Tyneside. Year 9 pupils tutored Year 7 pupils in English for up to three 30-minute sessions each week for 15 weeks. As with Shared Maths, the results of the pupils following the approach were no better than those of pupils in similar classes not following it.
Why is this surprising?
These findings are surprising because international and British evidence collected to date on peer tutoring has been very positive. The impact estimate in the Teaching and Learning Toolkit shows that, on average, peer tutoring approaches have had a strong positive impact on pupil progress. The Scottish trial noted above found that the average impact of peer tutoring was roughly equivalent to three additional months’ progress. A review of international research published last year summarised findings from 72 studies, and found a slightly larger average impact of an additional five months. The evidence on peer tutoring has been of particular interest to the EEF because, although all types of pupils appear to benefit from peer tutoring, there is some evidence that children from disadvantaged backgrounds and low attaining pupils make the biggest gains.
So the new findings, based on two large randomised controlled trials, present a considerable challenge.
How should we interpret these new findings?
Let’s start with what not to do. It would be a big mistake to ignore the new findings, or attempt to brush them under the carpet. Both individual studies appear to be robust, due to the quality of the evaluation design, the number of schools involved and the fact that most schools that began the project stayed involved until the end. But equally, we should not dismiss the international and domestic evidence base that has accumulated over the past 33 years, since the first review of peer tutoring included in the Toolkit was published. As medical research has shown, there are substantial risks in disregarding conclusions based on a series of findings collected over time in favour of new studies that seem to point in a different direction, no matter how interesting the new results.
So how can the findings from our trials be reconciled with the existing evidence? Well, we don’t know for sure. But three possible explanations deserve consideration.
1. Some types of peer tutoring are more effective than others
As Professor Steve Higgins says, “The Toolkit shows what worked, but doesn’t guarantee what will work.” It is possible that the form of peer tutoring tested in both projects that reported today (known in the literature as “cross-age peer tutoring”) is less effective in English schools than in other systems, or less effective with some age groups than others. For example, the Paired Reading programme was tested in secondary schools this time, rather than primary schools as in the Scottish study. This is one of the key reasons the EEF provides funding to test promising approaches that have not been rigorously evaluated in English schools before.
2. How the approaches were implemented affected their impact
The term “peer tutoring” describes a broad range of different approaches. It may be that there was something about the specific versions of the approach tested that meant they did not have an impact. It is possible that one of the approaches would have been more effective if more guidance had been provided to tutors on effective questioning strategies, or if the number or duration of tutoring sessions had been increased.
3. Common practice in English schools has changed
The trials assessed the impact of peer tutoring by comparing the progress of pupils in classes following the programmes to similar classes continuing with “business as usual”. Assessing what “business as usual” is can be very difficult, but it is possible that what now constitutes business as usual in some schools includes either a form of peer tutoring, or other collaborative approaches. It is possible – though unproven – that adding new peer tutoring schemes will make less difference in classrooms where peer or collaborative types of learning are already common, and show smaller benefits relative to the effects detected in older studies where this type of learning was more novel.
What we will do next
These hypotheses are a start. We don’t know enough to support any of them with certainty, but we hope they serve as starting points for further discussion.
In the meantime, the EEF will take two next steps:
First, we will incorporate the findings into the Teaching and Learning Toolkit. The Toolkit is a live resource that is updated as new findings from EEF studies and elsewhere are published. The findings from both EEF-funded studies have been included in the updated peer tutoring entry published today, alongside studies from the 2014 review noted above.
Second, we will continue to encourage schools to use the best available evidence to support decision-making, and to participate in projects that will improve the knowledge base we can draw on in the future. If the findings show anything, it is that we need to keep asking difficult questions.