
EEF Blog: What do we really know about ​‘what works’ in classrooms?

Author: EEF
Blog • 4 minutes

James Richardson discusses whether Randomised Controlled Trial results can be expected to have the same impact in your school.

An evidence-based teaching profession shouldn’t deal in absolutes. Rarely will there be a definitive answer to the question: what works in raising pupil attainment? Considering the EEF’s founding principle is to understand what works to close the attainment gap for poor children, this may seem a counterintuitive way to launch a blog. Education, perhaps more than any other policy arena, is soured by arguments based on false dichotomies: straw men constructed to defend the status quo or win political arguments, often adding little value to the collective endeavour of improving outcomes for all.

This was brought into focus on the weekend before Easter at the ResearchED conference, when Lee Elliot Major and I had enthusiastically agreed to talk about the challenges of implementing evidence-based approaches in schools. In the open discussion that followed our talk, and in conversations over lunch, the questions moved the focus in a different direction: back to the first principles of what we mean by evidence, and how we can be sure that the results from randomised controlled trials (RCTs) are useful to teachers in different contexts. For many teachers, even those as engaged and motivated as the typical ResearchED attendee, it is critical that we keep revisiting these questions, particularly when trying to navigate our way through polarised education arguments, even the old (and defunct) one on the value of quantitative versus qualitative educational research.

In this, the first EEF blog posting, I want to revisit the principles of the EEF’s approach to research evidence and ask if we can ever really know ‘what works’ in classrooms.

Constructing straw men

Take three familiar assertions from current education debates:

1. ‘The obsession with data is destroying creativity in the classroom.’

2. ‘Teachers trained through traditional routes are more effective than those trained through short, intensive graduate schemes.’

3. ‘Research demonstrates that teaching assistants are ineffective at raising pupil attainment.’

We will all have our own opinions on each of these statements, perhaps underpinned by research, but undoubtedly influenced by our own experiences and our ideological lens. I thought of myself as a creative classroom teacher, but when I moved to a more senior role examining school-level data, I began to appreciate its value in planning activities and targeting particular pupils. But the debates are often constructed to force you onto one side of the divide, with little room for balance and nuance; you either support data-driven instruction or you want to nurture the creative, exploratory, inquisitive nature of children. When it is presented in this way, you cannot be for both.

In the absence of any solid evidence on each statement we are governed by our instinct, and framed by our experiences. Our own judgement is the benchmark. This represents a challenge for the education industry: how do we build an effective education system when the instrument of measurement has no fixed point? The EEF aims to provide one piece of the jigsaw by synthesising, funding and promoting rigorous research that will provide a foundation for an informed debate.

Can we ever know ‘what works’?

There is a great deal of emphasis placed on the phrase ‘what works’, and perhaps those of us who are involved in designated What Works centres use the term too glibly. It suggests that if only our research design is robust enough, we can discover the teaching methods or interventions that will close the achievement gap. But ‘what works’ is really shorthand for ‘what has worked in the past and gives us the best indication of what is likely to work in your school, with your particular cohort of pupils’. Even with the emphasis the EEF places on running RCTs, a design that ensures that there are no systematic differences between control and intervention groups, we present our evaluation reports as ‘what worked’. RCTs are a blunt instrument, giving us an indication of ‘what works, for whom, under what circumstances’. How they work and how the impact may be moderated by particular contexts requires more detailed design and analysis.

A recent discussion with a friend who works in the Centre for Evaluation at the London School of Hygiene and Tropical Medicine highlighted the years of critical self-analysis that trialists in medicine and public health have been through to reach the point where the evidence base is now robust enough to understand how interventions are moderated by context. A paper by Chris Bonnell and colleagues in 2012 urged evaluators to recognise the importance of understanding causal mechanisms in RCTs and how far those findings can be applied in different settings. That does not mean we reject RCTs and return to small-scale studies that can never be replicated, but it does mean we have to understand that the overall effect size is contingent on circumstance and effective implementation.

The second statement on teacher training routes frames this nicely. It tends to provoke fierce loyalty to one route of teacher training or another, often based on our own involvement, on anecdotes from teachers we know, or on snippets of research headlines. There is a great deal of robust experimental and quasi-experimental evidence from the United States on the impact of different teacher training routes, but there are very few studies that declare with confidence that the findings can be generalised to different populations in different settings.

The fragility of evidence

Those implementing evidence-based programmes spend a lot of time discussing the concept of ‘fidelity to the model’. Some programmes may be fragile and require close attention to the specification; some are more robust and therefore more adaptable. The lesson for schools is to understand the ‘active ingredients’ of the programme and pay careful attention to how it is implemented when adapting it to fit the context of the school and the individual pupils.

The third statement above provides a good case in point. We know from the recent EEF trials – Switch On and Catch Up – that effective use of teaching assistants has common elements: the need to train them in specific interventions and to deploy them with targeted pupils. Ignoring these active ingredients will most likely reduce both the impact of the intervention and the teaching assistant’s value.

Balancing fidelity to the intervention with the nuance of context is the great challenge of implementing evidence-based approaches in schools, and it is why there is little time for an absolutist approach to education debates.

James Richardson is a senior analyst at the EEF. James joined the EEF in September 2013 after ten years as a teacher, Head of Faculty and Assistant Headteacher. From 2009 to 2013 he was the Senior Researcher for the MNS Foundation in Philadelphia, USA.