
EEF Blog: Updating the Toolkit

Author
Jonathan Kay
Head of Evidence Synthesis
Blog • 3 minutes

Jonathan Kay, Research and Publications Manager at the EEF, discusses the latest updates to the Teaching and Learning Toolkit.

Today we have updated six strands of the Teaching and Learning Toolkit, adding the latest research to Digital Technology, Mastery Learning, Oral Language Interventions, Peer Tutoring, Social and Emotional Learning, and Summer Schools. The Toolkit summarises the existing research on each approach, outlining its cost, the security of the evidence, and its impact (in additional months' progress).

What does it mean to update the Toolkit?

The Toolkit is created by searching through evidence in a systematic way. Instead of picking and choosing which research to include in a strand, we run a search of pre-specified databases with pre-specified search terms. The only way a study can be excluded from the Teaching and Learning Toolkit is if it does not meet pre-agreed criteria, such as the age of the study or the appropriateness of the measure. More detail on the processes underpinning the Toolkit can be found on our 'About the Toolkit' pages.
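
To make the idea of pre-agreed criteria concrete, here is a minimal, purely illustrative sketch of screening search results against fixed inclusion rules. The study fields, the year threshold, and the rules themselves are hypothetical examples, not the EEF's actual criteria or tooling.

```python
# Illustrative sketch only -- not the EEF's actual screening tooling.
# The criteria and thresholds below are hypothetical examples of
# "pre-agreed criteria" applied uniformly to every study a search returns.

from dataclasses import dataclass

@dataclass
class Study:
    title: str
    year: int
    uses_standardised_measure: bool

def meets_inclusion_criteria(study: Study, earliest_year: int = 2000) -> bool:
    """A study is excluded only if it fails a pre-specified rule,
    never because of a reviewer's ad-hoc preference."""
    if study.year < earliest_year:           # age of the study
        return False
    if not study.uses_standardised_measure:  # appropriateness of the measure
        return False
    return True

search_results = [
    Study("Tablet-based maths practice", 2018, True),
    Study("Early computer-aided instruction", 1994, True),   # excluded: too old
    Study("App pilot, teacher survey only", 2019, False),    # excluded: measure
]

included = [s for s in search_results if meets_inclusion_criteria(s)]
print([s.title for s in included])  # -> ['Tablet-based maths practice']
```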

When we update the Toolkit, we re-run these searches and see whether there are any new reviews that can be included in the strand. This is important: more trials than ever are happening in education, and we want to make sure that research conducted since the Toolkit's creation is not ignored.

Once we have found the new reviews, we update the impact, security and cost ratings of each strand, and make sure that the text reflects the new evidence. In the first few years of updating the Toolkit, the impact and security of the strands would sometimes change. For example, Teaching Assistants moved from 0 months to +1 month of additional progress as new research was added, which showed the positive impact of TA-led structured interventions. This time, despite the addition of 31 new meta-analyses, none of the headline figures have changed.

What have we learned?

This doesn’t mean that there is no valuable new evidence within the update. We have reviewed and updated the text of each strand, and provided details of the new meta-analyses in the technical appendices.

It is also interesting to see where new reviews are being added, even if they are not changing the overall figures. In this update, 21 of the 31 new meta-analyses are within the Digital Technology strand. By contrast, there are only two new meta-analyses on Oral Language Interventions. It is, perhaps, unsurprising that more research is being conducted on Digital Technology – but it is interesting that the latest reviews have not increased the average impact of the strand. We might have assumed that as technology progresses, so do learning gains. An initial look indicates that this may not be the case.

Why hasn’t the security improved in updated strands?

You may wonder how it is possible that 21 new meta-analyses could be added to the Digital Technology strand without the security of the strand increasing. This is because the EEF security rating is not simply determined by the quantity of research. We also consider the quality, consistency and recency of the reviews that make up the strand: for example, strands can lose a padlock if fewer than three of the reviews have taken place in the last three years. In the case of Digital Technology, there is a lot of recent, high-quality research, but the strand has lost a padlock because, although the average impact is four months, the reviews within it report impacts ranging from two months to over a year of additional progress. Regardless of how much evidence is added to a strand, if the variation between the impacts of the summarised studies is large, the strand cannot be awarded the highest security rating.
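
As a rough illustration of why adding reviews need not raise the rating, here is a hypothetical sketch that applies just the two rules described above. The five-padlock scale matches the Toolkit, but the exact thresholds and the padlock arithmetic are assumptions for illustration, not the EEF's actual method.

```python
# Illustrative sketch only -- the real EEF rating involves expert judgement.
# It shows why adding more reviews need not raise security: recency and
# consistency rules can each cost a padlock regardless of quantity.

def security_padlocks(review_years, review_impacts, current_year=2021,
                      max_padlocks=5, high_variation_months=6):
    padlocks = max_padlocks

    # Recency rule from the blog: lose a padlock if fewer than three
    # reviews took place in the last three years.
    recent = sum(1 for y in review_years if current_year - y <= 3)
    if recent < 3:
        padlocks -= 1

    # Consistency rule from the blog: large variation between review
    # impacts rules out the highest rating. The six-month spread used
    # as the threshold here is a made-up number for illustration.
    spread = max(review_impacts) - min(review_impacts)
    if spread >= high_variation_months:
        padlocks = min(padlocks, max_padlocks - 1)

    return padlocks

# A Digital Technology-style case: plenty of recent reviews, but impacts
# ranging from two months to over a year (say 13 months) of progress.
print(security_padlocks([2019, 2020, 2020, 2021], [2, 4, 7, 13]))  # -> 4
```

On these made-up rules, such a strand passes the recency check, but the wide spread of impacts caps it below the top rating, however many reviews are added.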

The future of the Toolkit

We continue to look for ways to improve the resource. That's why, over the next few years, we are launching a project to develop the Toolkit further. We want to be able to tell schools more about the individual studies that make up the evidence within each strand. In the longer term, this will mean that we can help teachers to understand not just the average impact of mastery learning in the classroom, but also how that impact varies depending on their subject, the age of their pupils, and the country they are in. It will give teachers a much better idea of whether what has worked elsewhere can also work for them in their particular context.