EEF Blog: Evidence in the absence of impact – lessons learnt from applying neuroscience in the classroom

Author
Jonathan Kay
Head of Evidence Synthesis
Blog • 3-minute read

Can an approach simultaneously be effective and ineffective? The EEF’s Research and Publications Manager, Jonathan Kay, looks beneath the headline result of one of our latest trial reports, Sci-napse…

Every report that we publish communicates a headline impact figure – but behind this figure there are often interesting lessons to be learned. This reflects one of the key challenges we face in communicating evidence. Often the question of “what works?” is better understood as a series of related questions about where, when and for whom approaches work.

A good example of an important message being hidden by a headline figure comes from our trial of Sci-napse: Engaging the Brain’s Reward System, co-funded with Wellcome, the independent evaluation of which we’ve published today.

Sci-napse uses a quizzing tool to try to improve pupil outcomes. At first glance, the result of this trial does not seem interesting: there is no evidence of an impact overall. Indeed, the schools that were trained in the programme made slightly less progress than the control-group schools that were not.

Yet Sci-napse is based on fairly well-evidenced neuroscience. In particular, the brain’s response to rewards has been shown to have an impact on memory formation – a link highlighted by a systematic review on education and neuroscience conducted for the EEF.

How, then, to square this evidence of promise with the absence of an impact in our trial of Sci-napse?

The process evaluation in the report offers some insights. Of the schools in the intervention group, only 54% of the teachers in the test-based classes actually met the minimum requirements of the intervention – using quiz questions in some of their classes. When only the classes that met this minimum standard are considered, the result is far more positive, with pupils making +3 months’ more progress than those in classes that were not offered the intervention.

So why does the EEF include pupils who did not receive the intervention in the impact measure we report? This is because all of the primary outcome measures in our trials are reported on the basis of “intention-to-treat” analysis, meaning that all pupils who were initially part of the trial are included in the final impact estimate.

Why do we do this? Because we want to know how likely it is that a programme would work in the real world of busy classrooms. There is little point in a school investing scarce time and money in a new programme if too few teachers put it into practice, as the overall effect on pupil outcomes is unlikely to make it worthwhile.
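To make the distinction concrete, here is a minimal sketch in Python. It is not the evaluators’ actual analysis – the class counts, outcome scores and effect sizes are all invented for illustration – but it shows how an intention-to-treat estimate, which keeps every class, differs from a “per-protocol” estimate that keeps only the compliant ones:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 100 intervention classes and 100 control classes.
# Roughly 54% of intervention classes comply, mirroring the rate
# reported in the Sci-napse process evaluation.
n = 100
complied = rng.random(n) < 0.54

# Invented class-level outcome scores (in standard-deviation units):
# compliant classes gain a little; non-compliant classes gain nothing.
control = rng.normal(0.0, 1.0, n)
intervention = rng.normal(0.0, 1.0, n) + np.where(complied, 0.2, 0.0)

# Intention-to-treat: compare ALL intervention classes with control,
# whether or not they actually implemented the programme.
itt = intervention.mean() - control.mean()

# Per-protocol: compare only the compliant classes with control.
per_protocol = intervention[complied].mean() - control.mean()

print(f"Intention-to-treat estimate: {itt:+.2f} SD")
print(f"Per-protocol estimate:       {per_protocol:+.2f} SD")

Because only around half the classes comply, the intention-to-treat estimate is roughly diluted by the compliance rate – which is exactly how a programme with genuine promise can show no impact overall.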

Of course, it may still be useful to know whether a programme can have a positive impact on the outcomes of the subset of pupils who receive it, as this may indicate it has the potential to make a difference if the programme developer can ensure it is implemented more successfully next time. But even then, we need to recognise that our sample of teachers is no longer random, and therefore more likely to be biased: the teachers who implement things well may be better teachers in lots of ways, and it might be this (rather than the programme, or the quality of delivery) that explains the positive impact on their pupils. In the case of Sci-napse, the low levels of implementation also mean that the numbers involved in this analysis are much smaller than for the overall estimate – making it hard to draw a reliable conclusion from the seemingly positive impact.
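The smaller sample cuts both ways: the standard error of a mean scales as sd/√n, so an estimate based only on the compliant subset comes with a wider confidence interval. A rough sketch with invented numbers:

import math

sd = 1.0                      # assumed spread of class-level outcomes

# Full intervention sample vs. the ~54% compliant subset.
for n in (100, 54):
    se = sd / math.sqrt(n)    # standard error of the mean
    half_width = 1.96 * se    # 95% confidence interval half-width
    print(f"n={n:3d}: SE = {se:.3f}, 95% CI = ±{half_width:.3f}")

With roughly half the classes, the interval around the estimate is about 1.4 times wider – so an apparently positive result is more easily explained by chance.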

Knowing whether schools can and do implement a programme is also an important component of assessing whether its approach is promising – particularly with neuroscientific approaches, which often have a good basis in science but have not yet been extensively tested in the classroom.

Programme developers need to work hard to make sure that interventions can be implemented well in most classrooms. In this trial, for example, schools had some difficulty using the quizzing software and struggled to add their own questions to quizzes. A good point of contrast is Dialogic Teaching – a programme that is well grounded in theory and has been developed extensively to make it an effective classroom practice, with a positive impact on pupil outcomes according to its EEF trial.

It is also important for teachers and school leaders to consider implementation at every stage of the school improvement process. Nothing will work everywhere, but a careful consideration of implementation can give programmes the best chance of success.