A central aim of the Education Endowment Foundation (EEF) is to improve knowledge and extend the evidence-base on what works to raise the attainment of disadvantaged pupils in schools in England. To achieve this, all EEF projects will be rigorously evaluated by independent experts in educational research according to minimum standards. These evaluations will be funded by the EEF.

The impact of projects on attainment will be evaluated, where possible, using randomised controlled trials, with a linked process evaluation to understand the elements of successful delivery. Evaluations will be conducted by one of the EEF's independent panel of evaluators. The EEF takes a cumulative approach to evaluation: the size of an evaluation, and therefore the number of schools or projects that we would require grantees to work with, is determined by what we already know and by whether there is a need to pilot a new approach or to demonstrate that an intervention can work at scale. To build capacity and maintain quality, the EEF is developing resources for evaluators on aspects of methodology and implementation.

We want to share our research about what works to raise the educational attainment of disadvantaged pupils. As projects progress, we will integrate the results of all of our evaluations into the summary of evidence for practitioners in the Teaching and Learning Toolkit. Sitting alongside the Toolkit and large-scale evaluations conducted by the EEF is the DIY Evaluation Guide, a resource for teachers and schools. It provides advice and guidance for teachers who want to improve the way they design and carry out small-scale evaluations of new strategies in their own schools.

Our website will feature evaluation reports and examples of approaches that work. We want to make our approach to evaluation rigorous and transparent. All our research can be accessed through the document library.

Classifying the security of findings from EEF evaluations

The EEF has developed a classification and accompanying procedure for judging the security of findings from EEF evaluations. The primary purpose of this system is to communicate to practitioners how much weight they should place on any particular finding: in other words, how confident they can be, on the basis of that single evaluation, that the same result would be found again in the same or a similar context. The ratings have been designed specifically to differentiate between EEF evaluations, most of which are set up as randomised controlled trials.

The classification system has been developed in consultation with members of the EEF Evaluation Advisory Board and Panel of Evaluators. The first version of the ratings was published in January 2014. The system then underwent a period of consultation, and the revised version was published in May 2014 alongside the consultation response document.

You can find the Classification system (May 2014) here.

You can find the Consultation on the classification system (May 2014) here.

You can find the original Classification system (January 2014) here.

The EEF welcomes feedback and comments on the classification system. Please send them to camilla.nevill@eefoundation.org.uk.