
EEF Blog: Incentives and education – what can we learn from trials in schools?

Author: EEF
Blog • 7 minutes

Stephen Tall on a lecture for the EEF by Professor John List, co-author of The Why Axis, and its implications for our work.

“Hands up who has read Freakonomics?” From where I was sitting, close to half the audience raised their hands, so the man who posed the question – John List, professor of economics at the University of Chicago, co-author of The Why Axis, and hailed by his colleague Steven D. Levitt as “a true trailblazer” – knew he would have a receptive audience.

Professor List’s EEF-hosted lecture (17 October 2014) offered a bold promise in its title: ‘Using Field Experiments To Revolutionize Education’. Actually, I shouldn’t call it a lecture: he said he would run it like one of his Chicago seminars, accepting questions from the floor whenever someone wanted to challenge him, and he stuck to his word.

This meant that what those of us in the audience got was a straight-talking conversation with one of the foremost behavioural economists in the world, unafraid to poke the wasps’ nest of what can be learned from trials in schools testing controversial issues like performance pay for teachers and cash incentives for pupils.

Making ‘what works’ the norm, not the exception

Introducing him, Dr David Halpern – chief executive of the Behavioural Insights Team (better known as the ‘nudge unit’) and national adviser to the Government’s What Works network – said he is sometimes asked, “Why do trials?” He prefers to turn the question on its head: “Why wouldn’t you do trials?” After all, he added, whenever he asks policy-makers the deceptively simple question, “What do you not know the answer to?”, there is never a shortage of responses. Routine, small-scale trials are, Dr Halpern argued, the best way of finding out what’s most likely to work – and, in an era of continuing austerity, the presumption should be that those responsible for spending public money should be able to point to evidence demonstrating its efficacy.

The ‘education production function’ and why it matters

It was a point that underpinned Professor List’s premise. Conventional wisdom among US policymakers, he noted, has been that we need teachers with higher degrees, we need smaller class sizes, and we need to invest more in order to make public education work. And that’s exactly what has happened over the last few decades: the proportion of teachers with a Master’s degree or higher has trebled since 1961, the student:teacher ratio has dropped from 22:1 (1970) to 16:1 (2005), and spending per pupil has more than doubled in the past four decades. Yet the proportion of 17-year-olds graduating has stayed more or less static over the same period. So what can be done?

The answer, he argued, lies in maximising what he termed the ‘education production function’: the idea that a student’s achievement depends on four factors – the child’s inputs, the household’s inputs, the school’s inputs and the relevant prices. In economist-speak, this means we need to know the appropriate elasticities, the marginal value of each factor, and the marginal cost of provision. To put it in more everyday terms, the education production function allows us to start answering questions such as “Is it more cost-effective to lower the number of students per classroom or to hire better teachers?”
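
To make that concrete, here is a minimal formalisation in standard economics notation – my sketch of the setup described above, not a formula taken from Professor List’s slides:

\[
A_i = f(C_i,\, H_i,\, S_i;\; p)
\]

where \(A_i\) is student \(i\)’s achievement, \(C_i\), \(H_i\) and \(S_i\) are the child’s, household’s and school’s inputs, and \(p\) is the vector of input prices. The cost-effectiveness question then reduces to comparing marginal product per unit of spending across the inputs \(x_j\):

\[
j^{*} = \arg\max_{j} \; \frac{\partial A / \partial x_j}{p_j}
\]

so money should flow to whichever input – smaller classes, better-qualified teachers, and so on – buys the most additional achievement per pound spent.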

Lessons from Chicago

Professor List then “opened his laptop”, as he put it, taking us through findings from a couple of the field experiments (also known as randomised controlled trials) he’s conducted in schools using the behavioural economic principle of ‘loss aversion’ – our human tendency to feel losses more keenly than we appreciate equivalent gains (a standard formalisation is sketched after the two examples below):

  • In Chicago, he tested the effects of a rewards programme for teachers based on how their students performed in tests. (He stressed this was done in full consultation with the teaching unions!) One group of teachers was paid a bonus of $4,000 at the start of the year. If their students’ grades improved by the end of the year, they could earn up to an additional $4,000 on top. But if their students’ performance declined, they’d have to repay the $4,000 they’d been given up-front. Key finding? The students in these classrooms performed better than those in the control group, in which teachers were not incentivised, gaining around an additional three months’ progress.
  • A similar, though much cheaper, trial tested the impact of incentives on high school students’ test results. On the morning of a key test, a group of students was handed a $20 note: if they achieved better than expected they got to keep it; if they didn’t they had to hand it back. Key finding? The achievement of these students was markedly higher when compared with the control group of students who received nothing.
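
Both designs lean on the same mechanism: the reward is handed over first, so the reference point shifts and the prospect of losing it looms larger than an equivalent promised gain. In the standard Kahneman–Tversky formulation of loss aversion – a textbook sketch, not a formula from the lecture – outcomes are valued relative to a reference point:

\[
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0,\\
-\lambda\,(-x)^{\alpha} & \text{if } x < 0,
\end{cases}
\]

with Tversky and Kahneman’s (1992) estimates of \(\alpha \approx 0.88\) and a loss-aversion coefficient \(\lambda \approx 2.25\): a loss is felt roughly twice as keenly as a gain of the same size. On this account, the threat of handing back $20 should motivate more strongly than the offer of winning $20 would.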

What price motivation?

The particularly interesting point about this latter experiment on student incentives was the timescale. The $20 cash incentive was offered on the same day the test took place, which meant there was no time for students to revise more or for longer. The fact their results improved implies they must have been trying harder in the test itself (it was a multiple-choice test, so their improvement may simply have been the result of completing all the questions to give themselves a better chance). In other words, even though they were sitting what we would have perceived to be a high-stakes test – one which would play a part in determining their future – for many of those students a $20 cash reward was a bigger motivator than the long-term benefits of doing well on the test.

Such findings are challenging for those of us involved in education. Most of us have been inspired by a love of learning and want that love to be passed on to the next generation. This is sometimes called ‘intrinsic motivation’: the self-generated desire to seek out new knowledge. Incentives suggest the active role of ‘extrinsic motivation’, when our motivation is instead activated by outside influences, such as threats or rewards.

Meanwhile, in England…

At the EEF, we recently trialled (through a grant to the University of Bristol) the impact of incentives and loss aversion on the attainment of 10,000 pupils in Year 11 sitting their GCSEs in 63 schools:

  1. In one group, pupils were told they had £80 at the beginning of each half-term: they would lose £10 for falling short on attendance or behaviour, and £30 for underperforming on classwork or homework;
  2. In a second group, pupils were promised a trip or an outing to an event. Each pupil was given eight tickets at the start of each half-term, and tickets were taken away for failing to work hard enough on those same four measures (attendance, behaviour, classwork and homework). Pupils needed 12 tickets at the end of a full term to join the trip – a full term comprising two half-terms, pupils could lose at most four of their 16 tickets and still qualify;
  3. A third group of schools was offered neither set of new incentives, but acted as the control group.

The trial found no significant overall impact from either set of incentives on GCSE results in English, Maths or Science. There was some improvement in classwork, but this did not translate into significantly better results in the three subjects measured. Neither incentive had a significant positive impact on students eligible for free school meals. However, the trip incentive appeared to be more effective for pupils with low prior attainment, whose Maths test scores improved by an extra two months’ progress.

Piecing together the evidence jigsaw

The EEF’s trial – casting doubt on the utility of incentives in improving student attainment – may at first sight appear to be at odds with Professor List’s findings, which suggest incentives are an effective means of unlocking latent productivity. Both, however, are valid trials. So how can we square this circle and explain the seeming divergence in findings?

  1. The results aren’t necessarily contradictory. In Professor List’s example of students’ test results improving when offered a conditional cash incentive of $20, we saw that student effort increased as a result – in that case leading to higher performance. We saw something similar in Bristol University’s EEF-funded trial: “There is a statistically significant improvement in classwork effort across English, Maths and Science for the financial incentive treatment” – but in this case it didn’t lead to an improvement in attainment.
  2. This shows the importance of testing and re-testing ideas in as real-world a way as possible. Both trials show incentives do change students’ behaviour. Professor List’s US trial showed a direct and positive relationship with better attainment. The EEF’s English trial showed a more complex relationship – incentives did improve attainment for some previously low-attaining pupils, but only in Maths. The work of the EEF is dedicated to finding out how best to raise attainment and narrow the gap. When it comes to incentives, it seems to us further work is needed to show how gains in effort can be translated reliably and consistently into gains in attainment (as well as to test for any long-term adverse effects).
  3. Let’s focus our efforts where they can have greatest effect. Designing incentive programmes that deliver real and sustained improvement in students’ attainment requires concerted time and energy. Yes, they can work. But so, too, do other approaches (see, for instance, our Teaching and Learning Toolkit). As our chief executive Kevan Collins noted in his blog-post, ‘Cash for grades: Findings from the UK’s largest ever trial on the use of incentives’: “The main message seems to be that there are more effective methods for raising the attainment of pupils eligible for free school meals than offering them financial or other incentives. The effort and cost of running incentive schemes seems to run counter to the impact that they have.”

Finally: thank you to John List

To close, a note of appreciation to Professor List on behalf of the 120 or more members of the audience. His was a compelling and thought-provoking lecture, carefully guiding his audience through the use of field experiments and their importance in finding out how we can improve educational opportunities for all. In fact, I can’t think of a better incentive to read his book.

Photos: John Russell