Scientists need the guts to say: I don’t know

A popular view of scientists is that they deal with certainties, but they are (or should be) the first to admit the limitations in what they know. Yet can scientists admit uncertainty and still be trusted by politicians and the public? Or would the language of possibilities and probabilities merely shift attention to those with more strident, confident arguments?

Nobody is expected to predict the future exactly. So there is generally no problem in acknowledging the risk of everyday activities, and it is natural to use past experience to be open and precise about the uncertainties. Patients may, for example, be told that for every one million operations there are expected to be five deaths related to the anaesthetic: an anaesthetic risk of five micromorts per operation, a micromort being a one-in-a-million chance of dying. This is roughly equivalent to the risk of riding 30 miles on a motorbike, driving 1,000 miles in a car, going on one scuba-dive, living four hours as a heroin user or serving four hours in the UK Army in Afghanistan.
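
To make those equivalences concrete, here is a back-of-the-envelope conversion of the figures above into micromorts per unit of exposure. It is a sketch only: the exposure amounts are the ones quoted, and the arithmetic simply rescales the five-micromort operation.

```python
# Rough micromort arithmetic for the comparisons above (illustrative only).
# A micromort is a one-in-a-million chance of dying.
micromorts_per_operation = 5  # five anaesthetic deaths per million operations

# Each exposure below is quoted as carrying roughly the same five-micromort risk.
equivalents = {
    "motorbike miles": 30,
    "car miles": 1_000,
    "scuba dives": 1,
    "hours as a heroin user": 4,
    "hours serving in Afghanistan": 4,
}

for activity, amount in equivalents.items():
    print(f"{activity}: ~{micromorts_per_operation / amount:.2f} micromorts each")
```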

In more complicated situations, scientists build mathematical models that are supposed to mimic what we understand about the world. Models are used for guiding action on swine flu, predicting climate change and assessing whether medical treatments should be provided by the NHS.

Statisticians such as me try to use past data to express reasonable uncertainty about the quantities — parameters — used in models and, in some cases, doubts about their structure. Take a wonderfully trivial example: last month it was reported that a shopper had bought a box of six eggs, and all had double-yolks — a “one-in-a-trillion chance”. This calculation was explained on Today by a man from the Egg Council who said that, as one in 1,000 eggs were double-yolkers, the chance of all six being double-yolkers was one in 1,000 x 1,000 x 1,000 x 1,000 x 1,000 x 1,000.
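
Under that independence assumption the arithmetic is easy to check. This is a minimal sketch: the one-in-1,000 figure is the Egg Council's, and the two billion boxes a year is my own illustrative guess at national consumption, used only to turn the probability into a waiting time.

```python
# The Egg Council's calculation: six independent eggs, each with a
# 1-in-1,000 chance of being a double-yolker.
p_double = 1 / 1_000
p_all_six = p_double ** 6
print(f"1 in {1 / p_all_six:,.0f}")  # 1 in 1,000,000,000,000,000,000 (a quintillion)

# If, say, two billion boxes of six were sold each year (an illustrative figure),
# the expected wait before a single all-double-yolk box turned up would be:
boxes_per_year = 2_000_000_000
print(f"about {1 / (p_all_six * boxes_per_year):,.0f} years")  # ~500 million years
```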

This aroused my suspicion. To begin with, this is not a trillion and, if this were the true chance, we would expect to wait 500 million years before this event occurred. So what has gone wrong with this model? It turns out that the egg-world may not be so simple. Double-yolkers are far more common in certain flocks, so there should be uncertainty about the “one in 1,000” parameter. Eggs in a box tend to come from the same source so once one double-yolker is found the chances increase that the rest will match. As a result there is uncertainty about the structure of the model too.
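
To see how much that matters, here is a toy mixture calculation. The numbers for the unusual flocks are invented purely for illustration (young hens are known to lay double-yolkers far more often); the point is only that a little clustering swamps the independence answer.

```python
# Toy model: a small fraction of flocks produce double-yolkers at a much
# higher rate, and all six eggs in a box come from the same flock.
# The 1% and 30% figures below are illustrative assumptions, not data.
p_unusual_flock = 0.01   # chance a box comes from a high double-yolk flock
rate_unusual = 0.30      # double-yolk rate within such a flock
rate_common = 0.001      # the quoted background rate

p_all_six = (p_unusual_flock * rate_unusual ** 6
             + (1 - p_unusual_flock) * rate_common ** 6)
print(f"about 1 in {1 / p_all_six:,.0f}")  # roughly 1 in 140,000 boxes
```

Even with made-up numbers the event stops being astronomical; the precise answer matters less than the fact that the structure of the model, not just its parameter, drives the result.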

Acknowledgement of parameter and structural uncertainty has become common in climate and other models. But there is a further level of uncertainty: of unforeseen surprises, Black Swans and Rumsfeldian unknown unknowns. There should always be a suspicion that there’s more going on than we can express in mathematics. Indeed, at Waitrose I bought a box marked “double-yolked eggs” for £2.49. Certainly not a one-in-a-trillion chance: double-yolked eggs can be common and can be detected, selected and packed at will.

The moral of “egg-gate” is, as the statistician George Box said: “All models are wrong, but some are useful.” They are not the truth — but are more like guidebooks, helpful but possibly flawed because of what we don’t know. Owning up to such ignorance is finally getting its due attention, although back in 1937 John Maynard Keynes, when talking about predictions for 1970, wrote “there is no scientific basis on which to form any calculable probability whatsoever. We simply do not know.”

So what are scientists to do when they aren’t certain and there is a lot they don’t know? There are ways of showing a little doubt. The Monetary Policy Committee of the Bank of England makes projections for inflation and change in GDP, providing a nice visual spread of possibilities as a “fan chart” but reserving a 10 per cent chance for going outside that range, a huge white void on the chart where anything might happen. And it did: the projections made in 2007 were wildly wrong. Maybe this unmapped region should be labelled “here be dragons”.

Another approach is to be “better safe than sorry”. In July 2009 the Department of Health made a “worst-case scenario” planning assumption of 65,000 swine flu deaths by assuming every unknown quantity was at its worst possible value. There have been 457 deaths so far. Such a super-precautionary approach can be expensive, does little for scientific reputation and may damage the response to a really serious pandemic.

In 2007 the Intergovernmental Panel on Climate Change said it had “very high confidence” that man has caused global warming, which it interpreted as having at least a nine out of ten chance of being correct. It must therefore accept up to a one in ten chance of being wrong. This seems a fair and open judgment, but has been generally ignored in the increasingly polarised arguments.

It would be nice to think that scientists could be upfront about uncertainty and not feel they have to put everything into precise numbers. It would still be possible for robust decisions to be made.

Acknowledgement of uncertainty may even increase public confidence in pronouncements. Recent events, whether the justification for the Iraq War or “Climategate”, have reinforced the fact that trust is the crucial factor — although this may be even more difficult to achieve than certainty.

David Spiegelhalter, Winton Professor of the Public Understanding of Risk at the University of Cambridge.