EEF Blog: Magic Breakfast – a case study in scaling evidence for impact

Author: EEF

Blog • 4 minutes

The Department for Education has this week announced it’s awarding £26 million to Magic Breakfast and Family Action to run morning clubs in over 1,770 schools across the country, focusing on the most disadvantaged areas. In this blog, we look at the EEF’s (continuing) role in this initiative and how it sits in our wider drive to improve how we support teachers and school leaders to scale evidence for impact…

The problem of scale

How do we scale evidence for impact, ensuring that children and young people from the most disadvantaged backgrounds get the maximum possible benefit from what we know about 'what works' in education?

It’s a key question for us at the EEF, given our explicit mission is to support teachers and senior leaders to improve outcomes in order to close the attainment gap.

We know how to summarise the existing 'what works' evidence, notably through our popular Teaching and Learning Toolkit and its Early Years companion.

We know how to generate evidence – we’ve so far funded over 160 projects, all independently and robustly evaluated, many with initially promising results.

But scaling evidence is the hardest challenge. 'What works', we know from countless examples in this country and beyond, often turns out not to work as it gets bigger. The programme developers become more distant, affecting the quality of delivery; schools make changes to key components, with unintended consequences.

Successful interventions can be quite fragile: stretched too far, they too often break. Programmes which not only raise attainment but are also capable of growing while sustaining that benefit are rare. Which is why, when we find them, we’re mustard keen to do all we can to scale that impact.

Magic Breakfast: our trial

A little over a year ago, we published the independent evaluation of the EEF-funded project Magic Breakfast. This found that its model of a free, universal, before-school breakfast club delivered an average of +2 months' additional progress for pupils in reading, writing and maths.

Importantly for the EEF, the evaluation of Magic Breakfast was what’s termed an effectiveness trial. This means we were testing a scalable model of the intervention under everyday conditions in a large number of schools.

This is a tough test to set. Small-scale trials of targeted interventions with intensive developer involvement are much more likely to yield positive headline impact figures – but their practical use to most schools is limited by their reach. So Magic Breakfast’s achievement was especially notable.

Magic Breakfast: what next?

Magic Breakfast is the first project successfully to reach the end of the EEF’s evidence generation pipeline. But our involvement hasn’t stopped.

For starters, it was included on our list of 'Promising Projects': those programmes which we have trialled with positive outcomes and which we encourage schools to consider.

We also began working with our colleagues at Impetus-PEF, one of our founding partners, to support Magic Breakfast. This has included providing a grant to help develop their model and business plan, and supporting their bid (jointly with Family Action) for the Department for Education's tender to expand breakfast club provision for disadvantaged pupils.

Today’s government announcement means that they will receive up to £26 million to deliver morning clubs to over 1,770 schools across the country, focusing particularly on disadvantaged areas.

The EEF will continue to provide support: we will appoint independent evaluators to assess new aspects of the clubs (such as how best to involve parents and carers, and how to change the culture of breakfast in secondary schools), monitor the fidelity of the expanded service to the evaluated model, and review the reach of the initiative and the schools being served.

Scaling evidence for impact: our approach

To date, we've committed £17m towards scale-up: 25 projects that will involve over 3,000 schools and early years settings and reach more than 204,000 children and young people.

As more EEF-funded projects reach the end of our pipeline, we’ll be taking an increasing role in ensuring that evidence-informed approaches and programmes are available to teachers and senior leaders across the country.

Helping successful individual projects to grow is, though, just one of the routes to scaling evidence for impact that the EEF is testing. In addition, we are:

  • co-funding a 23-strong Research Schools Network which aims to lead the way in the use of evidence-based practice and bring research closer to schools;
  • funding major campaigns to promote effective use of evidence: making best use of teaching assistants in Yorkshire and Lincolnshire, and improving primary-age literacy in north-east England;
  • advising both schools and the Department for Education on its £75m Teaching and Leadership Innovation Fund (TLIF) and £140m Strategic School Improvement Fund (SSIF) to help ensure these funds are spent in ways that are most likely to make a difference to pupil outcomes;
  • publishing EEF guidance reports on key issues like literacy and maths, summarising the best available evidence on key aspects of teaching and providing teachers and senior leaders with practical recommendations for everyday use; and
  • supporting the Suffolk Challenge Fund, which provides matched funding to encourage take-up of the EEF's Promising Projects.

As this list suggests, we don't believe there is a silver bullet for scaling up 'what works' evidence to boost young people's outcomes. Rather, we want to trial different approaches, learning from the independent evaluations we've commissioned and adapting as we develop the evidence of how best to scale evidence for impact.