Measuring impact: from monitoring to evaluation

One of the significant shifts in UK education over the last decade has been a growing need to understand the impact that specific interventions and teaching approaches in schools have on pupil outcomes.

It is important to recognise the difference between monitoring pupil progress and evaluating the impact of specific interventions.

Monitoring pupil progress is the more straightforward of the two tasks. Its principal focus is to establish whether pupils are meeting targets or milestones. It does not rely on judgement in the same way as evaluation: the outcomes tend to be binary, in that a pupil is either on target or they are not, although there are clearly degrees of how far off target a pupil might be. Monitoring progress through regular assessment points, each half term or full term, is a familiar process in most schools.

How is evaluation different?

In contrast, evaluation is an act that leads to a judgement about how effective something was, for example an intervention to raise low levels of literacy among Year 7 pupils. Evaluation tells us how something happened; it requires us to analyse the data we collect and determine whether the intervention is worth continuing. It is always tempting to pin down the cause of an effect: it is a very human thing to do and helps us make sense of the world. But ascribing impact in school can be a tricky activity, and one which, if done well, requires an evaluation.

For example, a school may introduce a new maths support programme, find that maths attainment has gone up on average, and infer that the programme caused the improvement. But what if something else caused it? What if the school’s science teachers were reinforcing some of the maths skills the support programme was aimed at improving? Might that have caused the improvement? Or was it the school’s new rewards system, which focused specifically on pupils making excellent progress in their core subjects? Or was it a combination of the two (and more)? Without an evaluation (and a good one at that), saying that X caused Y is not possible.

This AMPP guide is naturally focused on monitoring. By working hard to improve the quality of assessment in school, teachers and school leaders can prepare the ground for answering such causal questions robustly, using tools such as the DIY Evaluation Guide.

So a focus on improving assessment and monitoring practices in school is important for ensuring that an accurate picture of pupil attainment and progress is generated, but it also makes the process of evaluating impact robustly an easier and more fruitful one.


How will you identify areas of intervention or classroom practice that need a more thorough evaluation?