Striking a balance

Danielle Mason and Jonathan Kay introduce EEF commentaries and explain how they can help schools make best use of our evaluation reports.

Today, for the first time, the findings from each of our newly published projects will be accompanied on our website by a short commentary. Our goal in introducing these commentaries is to let teachers and school leaders know more about the implications of our report findings.

This is something of a departure for us. To date we have taken the view that the results of our independent evaluations speak for themselves. We strive to make our findings as accessible as possible. We ask our evaluators to describe impact in terms of months’ progress. We have adopted a simple five-point ‘padlock’ scale to describe the quality of the evidence. And our key conclusions for each project always cover implementation as well as impact.

Further questions

But sometimes the findings of an individual report, however well presented, raise further questions. Take today’s report on Parent Academy. Our trial did not provide any evidence that this twelve-week course for parents of primary pupils had any impact on attainment. How should teachers interpret this? On the face of it, it might suggest that schools should give up on parental engagement activity. But this is not the case: this single study does not negate the existing evidence on parental engagement overall. Indeed, we have another report out today which shows that texting parents information about tests and homework can have a positive impact on attainment. And we’re also publishing some promising – but not conclusive – results for a third programme called SPOKES, which helps parents of younger children to support their reading. So how can we ensure that teachers have access to this contextual evidence when they click on just one of our reports?

That’s exactly what we hope to achieve with the new commentaries. Of course, there are risks involved. Describing the context requires us to go beyond the findings of an individual report and draw on other evidence. This means we need to make judgements: judgements about which other evidence is relevant and which is not, judgements about the extent to which we can generalise from particular findings, and judgements about how to do this effectively in a few short paragraphs.

Informing change

So why do it? Why not carry on presenting our high quality, independent evaluations without interpretation, leaving schools to use them as they see fit, and avoiding any accusations that we have ‘sexed up’ reports on after-school chess clubs?

To answer this question brings us to the heart of what we are trying to achieve. For the EEF, generating the evidence is only half the job. We need to ensure that evidence informs change. It is not enough for us to present our findings and expect schools to do the rest. We know that teachers and school leaders won’t always have time to study all the evidence in detail. So if the contextual evidence is important for teachers considering a particular project, we need to get it to them.

Striking a balance

It’s a difficult balance to strike. When we interpret findings, even in the context of other high quality evidence, we can never be certain our interpretation is correct and there is always the chance that new results will overturn what we think we know now. But that doesn’t mean that the current evidence tells us nothing. Every new project gives us robust new information which improves our understanding.

Ultimately, we believe that if a new finding has implications for decision-making – and if we have enough faith in those implications to let them inform our own grant-making and our campaign work – then it would be remiss of us to keep it to ourselves. In the absence of clear-cut, definitive evidence, it is surely better that schools consider this type of information, rather than take no account of the evidence at all – or are paralysed by unactionable and confusing findings that never actually make it into the classroom.