A study is biased if its impact estimate systematically differs from the true impact. Such differences usually stem from weaknesses in the design or implementation of the evaluation.

For example, bias can be introduced if participants themselves decide whether to join the treatment or control groups. This ability to “self-select” could mean that schools with a particularly proactive head teacher or lots of funding make their way into the treatment group, while schools with less motivated head teachers or less money end up in the control group. When this happens, differences in the outcomes of the two groups may be due to these pre-existing features (e.g. more money or more proactive head teachers) rather than the intervention, and the estimate of the effect size will be biased.
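The mechanism above can be illustrated with a small simulation. The sketch below is purely hypothetical: it assumes an intervention with no true effect, a single pre-existing feature (“funding”) that raises outcomes and also makes schools more likely to opt in. Under self-selection the naive difference in mean outcomes looks like a large effect; under random assignment it is close to zero.

```python
import math
import random

random.seed(0)

TRUE_EFFECT = 0.0  # in this toy example the intervention genuinely does nothing


def estimate_effect(n_schools=10_000, self_select=True):
    """Difference in mean outcomes between treatment and control schools."""
    treated_outcomes, control_outcomes = [], []
    for _ in range(n_schools):
        funding = random.gauss(0, 1)  # hypothetical pre-existing feature
        if self_select:
            # better-funded schools are more likely to opt in to treatment
            p_join = 1 / (1 + math.exp(-2 * funding))
        else:
            p_join = 0.5  # random assignment: opt-in unrelated to funding
        treated = random.random() < p_join
        # outcome depends on funding plus noise; the intervention adds nothing
        outcome = funding + TRUE_EFFECT * treated + random.gauss(0, 1)
        (treated_outcomes if treated else control_outcomes).append(outcome)
    return (sum(treated_outcomes) / len(treated_outcomes)
            - sum(control_outcomes) / len(control_outcomes))


biased = estimate_effect(self_select=True)
unbiased = estimate_effect(self_select=False)
print(f"self-selected estimate: {biased:+.2f}")   # far from the true effect of 0
print(f"randomised estimate:    {unbiased:+.2f}")  # close to 0
```

The self-selected comparison attributes the funding advantage of opt-in schools to the intervention; randomisation breaks the link between pre-existing features and group membership, which is why the randomised estimate recovers the true (zero) effect.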

There are many other potential sources of bias, including measurement bias, which can be reduced by ‘blinding’ test delivery and marking, and attrition, which is discussed above.