The problem with Covid models
We should no longer let Covid policy be guided by the most pessimistic modelling.
In the general fog of uncertainty about Covid, politicians and advisers have turned again and again to models to try to figure out what might happen – and sometimes to try to scare us into compliance, too. These models will be central to the decision on whether to end Covid restrictions, due to be announced on Monday. Yet comments this week from Chris Hopson, the chief executive of NHS Providers, suggest considerable dissatisfaction with the models and the way they get used.
Hopson, quoted in the Telegraph, noted: ‘For the record, trust leaders are sceptical of the value of predictive statistical models here, given their performance of the past 15 months.’ Hopson has a point. On numerous occasions, claims based on modelling have proven to be overheated.
It’s important to recognise the difficulties with modelling a pandemic, particularly early on. Models rely on three main things. First, you need accurate data to assess the starting position. This includes information on how many people have already been infected (probably very few), how many people there are in society and their demographics – age, jobs, household size and so on. We also need information on healthcare capacity and to what extent that can be expanded.
Second, the models themselves are always simplified versions of reality – an attempt to identify the major dynamics so that the scale of analysis is manageable. The first models attempted were pretty much pen-and-paper maths. These were quickly replaced by computer simulations repurposed from pandemic-influenza planning. Either way, the trick is to pick out the major factors and make accurate assumptions about them. At the most basic level of analysis, the population can be divided into susceptible people, infected people and recovered people (which includes those who have died). The aim is to understand how people might progress through these three groups.
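To make the idea concrete, here is a minimal sketch of that basic susceptible / infected / recovered bookkeeping. The parameters (daily transmission and recovery rates, population size) are purely illustrative assumptions for this sketch, not figures from any real Covid model, and real epidemic models are far more elaborate:

```python
# Minimal SIR-style sketch. The parameters below are illustrative
# assumptions, not calibrated to Covid.
def sir(population=1_000_000, initial_infected=10,
        beta=0.3, gamma=0.1, days=200):
    """Step the susceptible/infected/recovered compartments once per day.

    beta  - average transmissions per infected person per day
    gamma - fraction of infected people who recover (or die) per day
    """
    s, i, r = population - initial_infected, initial_infected, 0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population  # contacts that transmit
        new_recoveries = gamma * i                  # 'recovered' includes deaths
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = sir()
peak_infected = max(step[1] for step in history)
```

Even this toy version shows the characteristic rise and fall of an epidemic wave: infections grow while susceptible people are plentiful, then decline as the pool shrinks.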
Third, there are educated guesses to be made, too. For example, the highly influential Imperial College London paper, ‘Report 9’, published on 16 March 2020, modelled scenarios based on Covid having a reproduction number – that is, the number of people each infected person goes on to infect – of between 2.0 and 2.6. That turned out to be too low: the ‘R’ number was probably at least 3 for the original virus, and it seems to be much higher for some subsequent variants, such as the Kent / English variant (now officially the ‘Alpha’ variant) and the Indian variant (now called ‘Delta’).
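To see why that guess matters so much: each generation of infections multiplies by R, so a seemingly small difference in R compounds rapidly. A toy illustration (pure exponential growth from a hypothetical 100 seed cases, ignoring immunity and interventions – none of these numbers come from the Imperial model itself):

```python
# Toy illustration only: unchecked exponential growth at a fixed R,
# ignoring immunity, behaviour change and interventions.
def infections_in_generation(r, generation, seed=100):
    """New infections in the nth generation, from `seed` initial cases."""
    return seed * r ** generation

low = infections_in_generation(2.4, 10)   # within the modelled 2.0-2.6 range
high = infections_in_generation(3.0, 10)  # nearer the later estimate for R
# After ten generations, R = 3 produces roughly nine times as many
# new infections as R = 2.4.
```

That compounding is why an underestimate of R that looks modest on paper can throw a model's projections off by an order of magnitude.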
These are serious difficulties for modellers, and the interpretation of results from modelling has to take into account the assumptions being made. Ideally, these assumptions are constantly reconsidered.
The conclusions of ‘Report 9’, alongside other modelling efforts, were controversial at the time. The Imperial report suggested that half a million people could die if the government did nothing and that healthcare services would be massively overwhelmed, with hundreds of thousands of deaths, if the government simply isolated the infected plus their close contacts, and socially distanced the elderly.
Like many people, I was extremely sceptical about the Imperial report’s results. But whether they were too high or not, there’s little doubt now that they told us something important: that Covid was not a repeat of previous pandemic scares like swine flu; that it would be very serious; and that we needed to take drastic measures to stop its spread. Of course, we can argue about what those measures should have been at any particular moment, but we certainly needed to buy some time to develop vaccines and treatments (like dexamethasone), expand healthcare and testing capacity, and so on.
On the other hand, if what we conclude from models is wrong, we can get our policies wrong and misallocate resources (and restrict freedoms too much, too). For example, the early modelling had spooked the government sufficiently to start building field hospitals, named after Florence Nightingale, around the UK. But in fact, they were barely used at all – partly because hospitals were never as busy as first predicted and partly because the government couldn’t staff the Nightingales without drawing valuable resources away from actual hospitals, which simply expanded their own capacity instead.
In the autumn, we were told in Downing Street media briefings that deaths could hit 4,000 per day without a second lockdown. We did, of course, have a second and a third lockdown, but the claim was still overblown, based on out-of-date data and assumptions. We never came close to 4,000 deaths per day, even with the much more transmissible Alpha variant.
And the current roadmap, now being reassessed, assumed that there would be considerably more pressure on hospitals (and, ultimately, deaths) following the opening up of shops and pubs. It was this modelling that led to the very conservative roadmap in the first place. In fact, it seems it is only because of the rise of another, yet more transmissible variant that we have seen cases and hospitalisations rising again. Given that 55 per cent of adults have now received two doses of vaccine, it seems very unlikely we will get another wave of hospitalisations on a par with the first two waves.
The experience of the initial Delta-variant hotspots seems to bear this out. While cases have been high, the link between cases and hospitalisations has been weakened. Where people have had two doses of vaccine, the risk of ending up on a ward has been relatively low. When we model what happens next, we need to make sure that these factors are properly taken into account.
Sadly, past form suggests that might not be the case, and that policy will be guided by the most pessimistic modelling rather than a clear-eyed view of what comes next.
Rob Lyons is a spiked columnist.