Improving outcomes for disadvantaged 16-18 year-old students (Open)

Application round open. Closing date: 3 October 2016 (5pm) 

The EEF and J.P. Morgan are seeking proposals from colleges and schools, training providers, local authorities, charities, networks, research institutions, universities, employers and other organisations that are interested in improving the attainment outcomes and employment prospects of disadvantaged 16-18 year-old students. This is a call for proposals as part of a £5 million, three-year funding programme that will support work in colleges, sixth form centres, schools and other providers across England. Our focus will be on supporting students who have not yet achieved a C grade or above in GCSE English and/or maths to do so, and/or to improve their Level 2 functional skills.

To apply, please log in or register on the application system and select the Improving outcomes for disadvantaged 16-18 year-old students round. Please first read the Application Guidance Notes for this round.

Building on the literature review undertaken for the EEF, which examined the existing evidence on what works for improving outcomes for this group, this funding round with J.P. Morgan aims to add to the evidence base by funding and evaluating promising interventions, programmes and approaches designed to raise the attainment of these students.

  • 16-18 Literature Review (updated 15 July 2016)

Successful proposals will:

  • focus on improving learning and future employment outcomes for these students, and have some promising evidence of positive impact on their attainment;
  • be funded to test the intervention across a number of settings in England that the applicant has not previously worked with;
  • be evaluated by an independent evaluation team; and
  • have the potential to be scaled-up further if shown to be effective and cost-effective.

We particularly welcome applications from partnerships involving, for example, both education providers and employers. 

Frequently asked questions for 16-18 funding round

Please see below for answers to frequently asked questions about this round. 

Does an application have to improve on both maths and English?

No. We are interested in interventions focused on just one subject, as well as general approaches that aim to improve outcomes in both. The exact outcome measured will vary from intervention to intervention, but will typically be something like performance in GCSE resits.

Why do applications have to work with “new settings”?

We expect most of the projects to have their impact evaluated through randomised controlled trials. This means recruiting new teachers and students to the trial, who are then randomly allocated either to participate in the project or to form the “control group”. As a result, successful grantees will usually need to identify new colleges and other settings. It may, however, be appropriate for the applicant to work with existing partners in finalising the intervention.

Is it possible to get an extension to the deadline?

No. But at the initial application stage, we only require a high-level overview of the intervention, its evidence base and its delivery model. Details such as budgets and scale will be refined later in the process, so we do not expect a high level of detail on those questions.

What will randomised controlled trials (RCTs) in post-16 settings look like?

Our independent evaluation teams will work with the successful applicants to design an evaluation appropriate to their intervention. Wherever possible, this will involve using RCTs to estimate the impact that the intervention has on learning outcomes, compared to what would have happened without the intervention. This tends to involve recruiting a large number of students/teachers/settings to the trial, with half then randomly allocated to receive the intervention. In our schools work, this is usually at the “school level” (e.g. 80 schools recruited, and half get the programme). In this round, trials might more often be at the “student level” (i.e. within a college, some students will participate and others will not). We carefully design the projects to ensure that all participants are appropriately compensated for their involvement, if necessary. Please see our Evaluation FAQs for more detail of our evaluation approach.
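As a purely illustrative sketch (not the EEF's or any evaluator's actual procedure, and with invented student identifiers), student-level random allocation within a single college can be as simple as shuffling the recruited list and splitting it in half:

```python
import random

# Hypothetical example: 100 students recruited within one college.
students = [f"student_{i:03d}" for i in range(1, 101)]

rng = random.Random(42)  # fixed seed so the allocation is reproducible/auditable
shuffled = students[:]
rng.shuffle(shuffled)

half = len(shuffled) // 2
intervention = set(shuffled[:half])  # these students receive the programme
control = set(shuffled[half:])       # these students continue as usual

print(len(intervention), len(control))  # 50 in each arm
```

Because allocation is random, any pre-existing differences between students should, on average, be spread evenly across the two groups.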

These students all come from different starting points. How can their outcomes be compared?

The RCT methodology aims to address this. By recruiting a sample and then randomly allocating participants into intervention and control groups, you can assume (if there are enough participants) that the relevant variables are evenly distributed across the two groups. Additionally, our evaluators often account for individual students' starting points (e.g. their Key Stage 4 outcomes) when analysing the results. They then calculate an “effect size”, which estimates the size of the difference in outcomes between the two groups.
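To make the “effect size” idea concrete, here is a minimal sketch using one common standardised measure (Cohen's d: the difference in group means divided by the pooled standard deviation). The scores below are entirely invented and the calculation is a simplification of what an evaluation team would actually report:

```python
import statistics

# Invented outcome scores for two trial arms (hypothetical data).
intervention = [4.8, 5.1, 5.6, 4.9, 5.3, 5.7, 5.0, 5.4]
control      = [4.5, 4.9, 4.7, 5.0, 4.6, 4.8, 5.1, 4.4]

mean_i, mean_c = statistics.mean(intervention), statistics.mean(control)
sd_i, sd_c = statistics.stdev(intervention), statistics.stdev(control)
n_i, n_c = len(intervention), len(control)

# Pooled standard deviation across the two groups.
pooled_sd = (((n_i - 1) * sd_i**2 + (n_c - 1) * sd_c**2) / (n_i + n_c - 2)) ** 0.5

# Cohen's d: mean difference in standard-deviation units.
effect_size = (mean_i - mean_c) / pooled_sd
print(round(effect_size, 2))
```

Expressing the difference in standard-deviation units is what allows results to be compared across interventions that use different outcome measures.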