This post was originally published on Innovation Growth Lab’s blog on 28th June 2018 by the EEF’s Triin Edovald.
Evidence-based policy making has now become central to the scientific agenda. The amount of rigorous evidence is increasing in all fields, but the question of how best to apply this evidence to policy making remains a challenge. In particular, because the evidence comes from a range of contexts, it is hard to predict whether a policy will have the same impact in one context as it did elsewhere. There are also implications for how evidence from another context should influence the design and implementation of policies.
How to apply evidence in different contexts was a subject we explored at IGL2018 from 12 – 14 June in Boston, US. Namely, a team of J‑PAL Senior Policy Associates and Managers – Lisa Corsetto, Alison Fahey, Ariella Park, and Claire Walsh – ran a workshop on using the generalisability framework, developed by Rachel Glennerster (DFID, UK) and Mary Anne Bates (J‑PAL North America) specifically to address the generalisability puzzle, to integrate different types of evidence, including results from randomised controlled trials (RCTs).
The whole idea behind this practical generalisability framework is that it allows policy makers to decide whether a particular policy makes sense in their context. The framework breaks down the question “will this programme work here?” into a series of questions based on the theory behind a programme. Different types of evidence can then be used to assess each step in the framework.
The workshop offered me and the other participants an excellent opportunity to use the framework on a case study that closely resembled some real-world policy dilemmas. The case study focused on a fictional South Asian country that had set out to encourage the growth of micro, small, and medium enterprises while drawing on evidence about potential programmes from rural Uganda and urban India. The J‑PAL facilitators supported us in applying the key steps in the framework by critically appraising the following questions:
- Step 1: What is the disaggregated theory behind the programme?
- Step 2: Do the local conditions hold for that theory to apply?
- Step 3: How strong is the evidence for the required general behavioural change?
- Step 4: What is the evidence that the implementation process can be carried out well?
Although the framework looks simple at the outset in how it addresses the questions of external validity and policy adaptation, the workshop certainly taught me that it is not necessarily straightforward to apply. This is partly because of the interaction between a programme’s or policy’s theory of change and the context in which it is being implemented, which may lead to failures of external validity. Applying such a framework requires not only strong contextual assumptions but also excellent knowledge of the local context. What complicates things further is that knowledge of one dimension of the context in which a policy or programme is being implemented may not match the strength and relevance of evidence from other contexts. Practising the generalisability framework also taught me that it can be a resource-consuming exercise when applied in a real-life setting.
Having said that, the undeniable benefit of this practical framework is that it allows policy makers (but also researchers) to draw on a much wider range of evidence than they might otherwise use. Policy makers should be open to the view that a study can inform policy in more locations than the one in which it was undertaken, and should equally draw on evidence from beyond a specific location. Furthermore, it is important to be realistic about how many RCTs can precede scale-up activities, as it would not be possible to test every policy in every country in the world. There is also the question of replication studies: how many times should a study be replicated in different contexts before its evidence can be relied on? Again, focusing heavily on replication studies could turn out to be a narrow-minded approach that disregards potentially relevant information. Instead, policy makers should focus their attention on mechanisms, using the available evidence to decide whether those mechanisms are likely to apply in a new setting.