How much does numeracy really matter for policy-making?
Last week the Royal Statistical Society published a survey of 101 MPs. They were asked “if you toss a coin twice, what is the probability of getting two heads?” Only 52% got it right, and this was an improvement on the 2011 version of the survey, in which only 40% answered correctly.[i] If we assume this is a reasonably representative sample of MPs, it seems pretty depressing, even though they do much better than the general public on this question, only 26% of whom get it right.[ii]
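For anyone who wants to check, the arithmetic is short: two fair tosses are independent, each lands heads with probability ½, so both do with ½ × ½ = ¼. A minimal Python sketch (the simulation is purely illustrative):

```python
import random

# Exact answer: two independent fair tosses, P(HH) = 1/2 * 1/2
exact = 0.5 * 0.5
print(exact)  # 0.25

# Illustrative simulation for comparison
random.seed(0)
trials = 100_000
hits = sum(
    random.random() < 0.5 and random.random() < 0.5 for _ in range(trials)
)
print(hits / trials)  # close to 0.25
```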
There’s no doubt that adult innumeracy is a real issue. According to another survey, over half the adult population has the numeracy expected of a primary school child or below. Just 20% are at the level of a GCSE grade 4 (a C in old money). Adults in England perform worse in international comparative studies than adults in nearly all Northern European countries (France and Ireland excepted) and South East Asian countries (Japan does best). But does any of this matter for good policy-making?
How Numeracy Matters
There is a general reluctance to accept that maths makes much difference to our work. After all, we all have calculators, and for the really complex stuff there are usually nerds available to do the detailed work. But a weak understanding of maths causes real problems when we have to start thinking about numbers in ways that aren’t arithmetical but also aren’t intuitive.
For instance, people really struggle with big numbers. They are not instinctive, going well beyond the kind of number set that early humans would have needed to understand. In policy terms this is a particular problem when you’re thinking about public spending, where large differences between numbers get lost because they all sound big. For instance, a recent survey found that people, on average, thought 8% of all UK public spending goes on MPs’ pay (which would require them each to earn £100 million a year).
Now you can argue both that this particular survey was badly designed and that people are using the question to make the point that they feel MPs are overpaid. Perhaps if respondents were offered a cash prize for accuracy, guesses would be closer. But, even taking these things into account, the error is so huge that some other misunderstanding must be going on. I suspect that even if you gave people the actual MP pay bill and the total amount of public spending they’d still, on average, significantly overestimate the percentage.
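To see just how wild the 8% guess is, here is a rough sketch of the arithmetic. The figures are illustrative assumptions rather than numbers from the survey: roughly 650 MPs and annual UK public spending of about £800 billion.

```python
# Illustrative assumptions: ~650 MPs, UK public spending ~£800bn a year
mps = 650
total_spending = 800e9

implied_pay_bill = 0.08 * total_spending   # what 8% of spending would be
implied_salary = implied_pay_bill / mps    # per MP, if the guess were true
print(f"~£{implied_salary / 1e6:.0f}m per MP")  # ~£98m per MP
```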
There is evidence this directly affects policy judgements. In one study, people who place “1 million” halfway along a number line between “1,000” and “1 billion” are more likely to misunderstand policy outcomes. I think you can see this in, for instance, debates about NHS spending. Last September there was a brief flurry of negative press, including a Telegraph front page with outraged comments from MPs, about the NHS wasting money by recruiting 42 new managers on salaries of up to £270k. You could argue on principle that these salaries are too high, but in terms of NHS spending it is a rounding error (or 0.006% of last year’s budget).
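The number-line error is worth making concrete. On a linear scale from 1,000 to 1 billion, the true halfway point is about 500 million, and 1 million sits almost at the very start:

```python
lo, hi = 1_000, 1_000_000_000

# True midpoint of a linear scale from 1,000 to 1 billion
midpoint = (lo + hi) / 2
print(midpoint)  # 500000500.0

# Where 1 million actually sits along that scale (as a fraction)
position = (1_000_000 - lo) / (hi - lo)
print(f"{position:.4f}")  # 0.0010 -- about a tenth of one percent of the way
```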
Again there is obviously a political motivation here, it’s an opportunity to bash the NHS. But there is also a belief that the money represents meaningful “waste” when it would in fact fund the NHS for around twenty minutes.[iii] This in turn has real world impact as successive governments have gone on drives to reduce NHS management costs, which has likely had a net negative impact on efficiency.
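The arithmetic behind the “rounding error” point can be sketched directly, taking the roughly £200 billion annual NHS budget from the footnote, the £270k ceiling for the percentage, and an assumed (illustrative) £180k average salary for the twenty-minutes figure:

```python
nhs_budget = 200e9                 # assumed annual NHS budget (footnote iii)
minutes_per_year = 365 * 24 * 60

max_cost = 42 * 270e3              # if every salary hit the £270k ceiling
print(f"{max_cost / nhs_budget:.3%}")  # 0.006% -- the rounding error

avg_cost = 42 * 180e3              # illustrative average-salary assumption
per_minute = nhs_budget / minutes_per_year
print(f"~{avg_cost / per_minute:.0f} minutes of NHS spending")  # ~20 minutes
```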
Alongside the “big number” problem, the most common formal numeracy issue is probably risk assessment. We know people with lower numeracy skills make worse decisions about their own health and finances. The general public making poor decisions does, of course, have policy implications (e.g. people not realising they’re not saving enough for their retirement). And there’s no reason to think this wouldn’t also apply to policy decisions made by less numerate Ministers where they have to make assessments of the balance of risks.
Then there is the problem of statistics readily accepted as true because people do not have the ability, or confidence, to interrogate them. For instance, at lots of education events at the moment I hear the stat that, in the last OECD PISA study, the UK came 68th out of 71 countries on children’s self-reported life satisfaction, which is then used to justify all sorts of policy prescriptions. But if you interrogate the data it’s pretty obvious that there is a cultural interpretation issue here, as the highest ranked countries are all clustered in Eastern Europe and the Balkans. Do I really believe life satisfaction is vastly higher in Albania than England? I’d certainly treat the claim with some scepticism, but most people just seem to accept it as a concrete fact.
Stats shorn of context and caveats, then widely accepted as true, are a widespread problem. I wrote about this in the context of Covid modelling, where we have repeatedly seen misunderstanding leading to both fear-mongering and complacency. But it applies to all policy areas. I get particularly exercised about the way economic projections are used to justify financial decisions, because so few politicians understand how shaky those projections are or what underlying variables would materially change them.
The polarisation problem
Given that low numeracy does cause problems with big numbers, probability, and credulity around data, it would seem logical to argue that we would, ultimately, have better policy with more numerate politicians. But there is a crucial caveat.
The problem is that formal numeracy can be overpowered by “emotional innumeracy”. The American academic Dan Kahan ran a study asking people to interpret the same set of data about a trial of skin cream and the impact of gun control measures. More numerate respondents were, as expected, more accurate in answering questions about the cream but below average when asked about the gun control measures. Kahan argues that “more numerate subjects use their quantitative-reasoning capacity selectively to conform their interpretation of the data to the result most consistent with their political outlook.” An obvious real-world example of this has been the endless online Covid rows. People on both sides who are relatively comfortable with numbers have been more able to cherry-pick studies and models to reinforce their political beliefs.
This complex relationship between political beliefs and maths makes it difficult to be certain that higher levels of numeracy would lead to a better political class or a more informed public. I suspect more numerate politicians (and a more numerate public) would lead to better policy in systems with low polarisation but not in those with high polarisation.
This might explain an apparent contradiction in the relationship between national numeracy levels and good government. The European countries with the highest levels of numeracy (Finland, Sweden, Netherlands) do have better outcomes in most policy areas. But Hungary and Russia also perform above average, and above England, and I definitely would not prefer their governments.
Perhaps relatedly, Bobby Duffy, in his book “The Perils of Perception”, shows how the strength of misperceptions about policy issues in the general population correlates with levels of education, but even more strongly with the level of unjustified confidence people have in their answers. The countries where people made the best guesses (e.g. Sweden again) had the lowest confidence in the accuracy of their answers.
All of this suggests that, when trying to improve policy, numeracy is a secondary concern to polarisation. It may even make things worse. In a context where the incentive for politicians is to find points of agreement and persuade opponents of the genuine merits of their case, higher numeracy would be a positive: it would enable them to do this while avoiding common reasoning errors. In a context where the incentive for politicians is primarily to display loyalty to an ideology and/or demographic group, higher numeracy could make them more dangerous, as they will be able to use it to further strengthen the convictions of their tribe.
If you’re new to the site and enjoyed this piece you can sign up here to receive future pieces by email.
[i] Notably 68% of pre-2010 MPs got it right versus 38% of the 2019 intake. I wonder if a few remembered the right answer from last time.
[ii] A majority of people pick 50%. This is also the most common error for MPs. I *think* this happens because people are either guessing or thinking about the probability of getting a second head *after* throwing a first head, which is 50%.
[iii] Total NHS spending was around £200 billion last year. There is probably also an element of scope neglect going on here. People don’t get more angry or concerned about things in linear proportion. They’ll be as cross about a £4k duck house as £200k of spending on a second home. Or, in the commonly given example of scope neglect, will offer to donate the same amount of money to save 2,000 birds as 200,000.