### Analysis & Interpretation

Once you have completed your intervention and testing, you should enter all of your data into an Excel spreadsheet, with columns for the post-test data and a row for each pupil, and then calculate an ‘effect size’ for your intervention.
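The guide does not spell out the formula at this point, but a common definition of an effect size is the standardised mean difference between the two groups (Cohen's d): the difference in post-test means divided by the pooled standard deviation. A minimal Python sketch, assuming that definition and using hypothetical score lists:

```python
import statistics

def effect_size(treatment_scores, control_scores):
    """Standardised mean difference (Cohen's d): the gap between the
    group means in units of the pooled post-test standard deviation."""
    mean_t = statistics.mean(treatment_scores)
    mean_c = statistics.mean(control_scores)
    n_t, n_c = len(treatment_scores), len(control_scores)
    # Pooled standard deviation, weighting each group's variance by its size
    pooled_sd = (((n_t - 1) * statistics.variance(treatment_scores)
                  + (n_c - 1) * statistics.variance(control_scores))
                 / (n_t + n_c - 2)) ** 0.5
    return (mean_t - mean_c) / pooled_sd

# Hypothetical post-test scores, one value per pupil
treatment = [12, 15, 14, 16, 13]
control = [11, 12, 13, 10, 12]
print(round(effect_size(treatment, control), 2))
```

The same calculation can be done directly in Excel with `AVERAGE`, `VAR.S` and a little arithmetic; the sketch above simply makes the steps explicit.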

### What does an effect size mean in my classroom?

Effect sizes can be translated approximately into the additional months’ progress you might expect pupils to make as a result of a particular approach being used in school, taking average pupil progress over a year as a benchmark. The progress that an average pupil in a year group of 100 students makes over a year is equivalent to that pupil moving up from 50th place to 16th place, if all the other students had made no progress.
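The 50th-to-16th-place illustration can be checked against the normal distribution: if the year's progress corresponds to an effect size of 1.0 standard deviations, a pupil at the 50th percentile moves to roughly the 84th percentile, i.e. about 16th place from the top in a group of 100. A short sketch, assuming normally distributed scores (an assumption not stated in the guide):

```python
from statistics import NormalDist

def new_rank(effect_size, group_size=100):
    """Approximate new rank (from the top) of an average pupil who gains
    `effect_size` standard deviations, assuming normally distributed
    scores and that all other pupils stand still."""
    percentile = NormalDist().cdf(effect_size)  # share of pupils now below them
    return round(group_size * (1 - percentile))

print(new_rank(0.0))  # → 50 (no change: still mid-table)
print(new_rank(1.0))  # → 16 (matches the 50th-to-16th-place illustration)
```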

The conversion we have used corresponds to progress in Key Stage 1. Of course, a typical month’s progress at primary school is greater than at the end of secondary school, so this conversion may understate some impacts at secondary level. However, the conversion still gives an indication of the relative effectiveness of interventions (e.g. it will always show which of two interventions is more effective, even if the months’ progress conversion is slightly conservative).

As well as calculating the average effect on attainment, you might also want to check whether the effect differs for particular subgroups, such as boys and girls.
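A subgroup check is just the same effect-size calculation repeated within each subgroup. A self-contained sketch, using hypothetical dictionary keys (`'group'`, `'subgroup'`, `'score'`) to stand in for spreadsheet columns:

```python
import statistics
from collections import defaultdict

def subgroup_effects(pupils):
    """Effect size (mean difference / pooled SD) computed separately for
    each subgroup. `pupils` is a list of dicts with hypothetical keys:
    'group' ('treatment' or 'control'), 'subgroup' (e.g. 'boys'/'girls'),
    and 'score' (the post-test result)."""
    scores = defaultdict(lambda: defaultdict(list))
    for p in pupils:
        scores[p['subgroup']][p['group']].append(p['score'])
    effects = {}
    for subgroup, groups in scores.items():
        t, c = groups['treatment'], groups['control']
        n_t, n_c = len(t), len(c)
        pooled_sd = (((n_t - 1) * statistics.variance(t)
                      + (n_c - 1) * statistics.variance(c))
                     / (n_t + n_c - 2)) ** 0.5
        effects[subgroup] = (statistics.mean(t) - statistics.mean(c)) / pooled_sd
    return effects
```

Subgroups will usually be small, so treat any differences you find as prompts for further investigation rather than firm conclusions.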

Only the post-test results are used in determining the effect size of an intervention, as we are interested in the difference in outcomes created by the intervention.

#### Interpreting results

If you use random allocation and have implemented your evaluation exactly as planned, the only difference between the control and treatment groups should be the intervention. Unfortunately, this is not always the case, and differences may have occurred for a number of reasons. When interpreting your results, you will need to consider the other factors that may have brought about the change (or lack of change) that you are seeing. In particular, the effect might be due to:

• The intervention or approach that you are testing: the effect on attainment may be a direct result of the intervention you are testing.
• Systematic differences between the groups: if you are not using random allocation, there might be systematic differences between your groups that have brought about the effect. For example, one group of children might be taught by a different (and perhaps better) teacher, or be in a different school that is implementing additional interventions, which might affect your results.
• Problems with your evaluation methods: there are a number of factors regarding your evaluation that might affect your results. You should think about all the steps above, and in particular whether there were any differences in the timing or delivery of your pre- and post-testing that might affect the results. For example, the intervention group test might be done at a different time of day or when more pupils were absent from school.

The original DIY Evaluation Guide was produced by Stuart Kime and Professor Rob Coe of Durham University for the Education Endowment Foundation. (The PDF document can be accessed in DIY Resources on the right-hand side of the page.)

Stuart taught English in secondary schools for ten years before starting a full-time PhD in Education in 2011. Stuart’s PhD focuses on the use of student evaluations of teaching in secondary schools. He is now Director of Evidencebased.education.

Rob is Director of the Centre for Evaluation and Monitoring at Durham University, which is the largest educational research centre in a UK university. Prior to beginning an academic career Rob taught Mathematics in secondary schools and colleges.

The interactive version of the DIY Guide has been developed by the EEF in collaboration with Evidencebased.Education and Durham University.