The SEND Review was an 11-month programme designed to help secondary schools strengthen their support for pupils with special educational needs and disabilities (SEND). Delivered by nasen (the National Association for Special Educational Needs), the programme offered training and coaching to SENDCos (special educational needs co-ordinators), helping them lead whole-school improvements in inclusive practice.
The goal? Better outcomes for pupils with SEND – not just in their academic attainment, but also in their well-being and attendance. We commissioned an evaluation, led by Manchester Metropolitan University, to find out what impact the programme had on these outcomes.
What did the evaluation show?
The key measure we looked at was GCSE English Language performance for pupils with SEND. Findings from the evaluation suggest that, on average, pupils with SEND in schools using the SEND Review made one month’s additional progress compared to their peers in other schools.
But there’s a catch – what does the very low security rating mean?
The findings from the impact evaluation come with a very low security rating. This means that we cannot draw conclusions about the impact of the programme from this evaluation.
The very low security rating reflects the high level of missing data in the outcome measure. The evaluation relied on long-term pupil outcomes, and challenges with data collection – including the effects of the pandemic – meant the final dataset was incomplete.
So, what can we take from this?
Even though we can’t draw conclusions about impact, the evaluation still offers valuable insights.
SENDCos reported that the programme helped them reflect on and improve their practice. It supported stronger strategic leadership of SEND, raised the profile of SEND across the school, and contributed to a more collaborative culture – one where leadership and responsibility for SEND provision were more widely shared. There was also evidence of schools using data more effectively and engaging meaningfully with peer review to guide change.
All participating schools engaged with the programme, and feedback suggests it was well-received and feasible to implement.
What’s next?
A more robust analysis is on the way. Later this year, the evaluators will use the National Pupil Database to conduct further analysis, which should fill some of the gaps in the data and offer a more secure picture of the programme’s impact. Results are expected in summer 2026.