The pandemic has highlighted the use of quantitative evidence in policy making in a way few of those working in the area would ever have imagined. Prior to 2020, if anyone had told modellers and statisticians that there would be nationwide conversations about “R-numbers” or excess deaths, they would not have been believed. Yet there is now a public consciousness that models, data and statistics are driving thinking in the heart of government as perhaps never before, and that this evidence has profound effects on daily lives.
As a former Chief Scientific Adviser to the Home Office, I can only delight in the further use of evidence in public policy making, but as with most good things, challenges also arise. Do those in government, both officials and ministers alike, have access to the resources they need to understand the shortcomings and caveats of such evidence? Perhaps even more critically, do those producing the evidence have an understanding of what is really needed to help inform good decision making? Many an insight has been lost between its discovery and its communication to a lay-person.
Good decisions need rounded evidence. When evidence is being assembled on any decision, the most important yet usually most difficult part is to understand the interplay between different pieces of information. During COVID-19, for example, it has often been claimed that there is a dichotomy between health and economic choices, but in reality the two are, in most cases, tightly aligned. However, with our system of expertise, how many disease modellers really understand the macroeconomic arguments, or economists the intricacies of the epidemiological models? It is at that interface that the decisions critical to the outcomes need to be made. Bringing together the evidence from the physical, biological, mathematical, technological and social sciences becomes vital. To put it another way, quantitative evidence becomes even more important when combined with the qualitative, and this is reflected in the research needs of government.
However, in many academic disciplines those who really work at understanding the interfaces and synthesising the evidence are seen as “generalists” and not given the recognition they deserve. Maybe the experiences of the pandemic will facilitate a change in mindset, but this will need to be driven both by those with experience of the needs of policy makers and by those who determine the systems of recognition. Care needs to be taken, though, as experts in one area can feel pushed to answer questions about another where they have no expertise, and an inaccurate answer can lead either to their very valid expertise being discounted, or to inaccurate information being treated as incontrovertible truth.
Many experts, particularly academic ones, have now been exposed to policy making, and this should yield dividends for years to come. Post-COVID, the exposure is unlikely to be of the same intensity, but if a generation of scientists, in the broadest meaning of the word, realise that their expertise can truly inform decision making, then we should all gain. There is also very much a role for experts in informing the public, so that people can understand and evaluate the decisions being made. It is not always the case that those informing decision making should also be those directly informing public debate: without considerable care, perceived conflicts of interest can compromise the trust that both groups, policymakers and the public, place in the information.
For the future, quantitative evidence is now firmly on an upward trajectory in its influence on policy making, but there is a risk. When the time comes to look back at the consequences of the decisions made during the pandemic, there will be a tendency to rate the outcome on an arbitrary scale, and that rating will then be intrinsically linked to the perceived usefulness of “the science”. Decisions had to be made in the face of uncertain evidence, and with hindsight some of those choices would have been different. But we will need to keep reinforcing that decisions made in the face of uncertain evidence are always better than those made with no reference to the evidence at all.
About the author
Professor John Aston
Professor John Aston is Harding Professor of Statistics in Public Life at the University of Cambridge. He is based in the Statistical Laboratory, Department of Pure Mathematics and Mathematical Statistics, and from 2017 to 2020 was Chief Scientific Adviser to the Home Office. He is an applied statistician who works in areas including medical imaging and official statistics. He was a founding director of the Alan Turing Institute.