Why do we fear things that are as rare as getting struck by lightning?
While few of us think about any real, cumulative risk that we might face (for example, the chances of someone our age dying within one calendar year from today), we are increasingly anxious about imperceptible risks – those that usually rank around the ‘getting struck by lightning’ figures.
We often feel uncertain about the data relating to exposure to particular events, such as infections, toxicants and environmental changes. This uncertainty can induce irrational responses – and such responses can create further problems.
Consider the data on the effects of immunisation against the acute specific fevers of infancy and childhood. It is clear that such immunisations have done much to tackle the morbidity and mortality associated with these conditions, and have had a profound effect on human life expectancy. Yet we have become susceptible to panics about the safety of immunisation. This can be seen in the ongoing controversy about the MMR (measles, mumps, rubella) vaccine. We know that measles is highly contagious and that it occurs in outbreaks in communities with immunisation rates below 75 per cent.
There are clear data about the dangers of a measles outbreak. The illness will be accompanied by ear infection in 1/20 cases; by pneumonia or bronchitis in 1/25 cases (with some permanent sequelae in terms of lung disease); by convulsions in 1/200 cases; by meningitis/encephalitis in 1/1,000 cases; by death in 1/2,500–4,000 cases; and by the terrible problem of sub-acute sclerosing panencephalitis in 1/8,000 children.
And yet, anxiety fuelled by a poorly supported hypothesis – that the MMR vaccine is associated with autism – has led to a reduction in immunisation rates. As a result, there could even be a measles outbreak in the UK, following outbreaks in Sweden, the Netherlands and Ireland. This is a truly damaging response to problematic data.
This looks like a repeat of an old story. Mistaken concerns about the pertussis vaccine and encephalitis in the 1970s caused a drop in immunisation rates, from 79 to 31 per cent – resulting in an epidemic that claimed 28 lives.
There are, however, some real, if limited, problems with vaccines. Poliomyelitis was successfully eradicated in the UK after the introduction of the Salk vaccine and the use of subsequent attenuated live vaccines. But the dangers of live vaccines were made clear in a recent outbreak of paralytic polio in the Dominican Republic and Haiti, resulting from use of the oral polio vaccine.
The problem is that a live vaccine has the capacity to alter in a way that allows it to cause infection in particular individuals – but this occurs at a rate of less than 1/2,000,000 doses. By assessing the data rationally in this way, we can see that the benefits of immunisation far outweigh the risks – even though it is impossible to intervene in a way that produces benefits for all and problems for none.
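The comparison can be made concrete with some back-of-the-envelope arithmetic using only the figures quoted above. A cohort size of one million is an assumption chosen purely for illustration; the measles complication rates are those listed earlier (taking the more conservative end of the 1/2,500–4,000 death-rate range), and the vaccine-side figure is the less-than-1/2,000,000 rate quoted for live vaccines:

```python
# Illustrative sketch using the rates quoted in the article.
# COHORT is an assumed population size, chosen only to make the
# per-million comparison concrete.
COHORT = 1_000_000

# Complication rates per measles case, as listed in the article.
measles_rates = {
    "ear infection": 1 / 20,
    "pneumonia/bronchitis": 1 / 25,
    "convulsions": 1 / 200,
    "meningitis/encephalitis": 1 / 1000,
    "death": 1 / 2500,  # conservative end of the quoted 1/2,500-4,000 range
    "SSPE": 1 / 8000,
}

# Upper bound quoted for vaccine-associated infection from live vaccines:
# fewer than 1 per 2,000,000 doses.
vaccine_rate = 1 / 2_000_000

# Expected events if one million children caught measles...
measles_harms = {name: COHORT * rate for name, rate in measles_rates.items()}
# ...versus expected events from one million doses of a live vaccine.
vaccine_harms = COHORT * vaccine_rate

print(measles_harms["death"])  # 400 deaths per million cases
print(vaccine_harms)           # 0.5 vaccine-associated infections per million doses
```

On these figures, a million measles cases would mean roughly 50,000 ear infections and around 400 deaths, against about half of one vaccine-associated infection per million doses – a difference of several orders of magnitude.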
How are the public’s views moulded on these kinds of scientific and health issues? In a thoughtful review of how public opinion has been formed in relation to genetics, Celeste Condit points out that while many people talk up the ‘public’s view’, little serious discussion or engagement has taken place (1). Little account is taken of the fact that ‘public opinion’ is rarely a single, coherent collective view, and that there are often profound differences in opinion across society.
The notion that the public is the aggregate of all the individuals in a given territory is strongly influenced by our particular view of representative democracy. Surely it is more realistic to regard the public as a collective concept, a grouping that avoids the influence of narrow personal interests in order to achieve benefits for the community?
The role of special interest or pressure groups seems to be important in influencing people’s views (2). There is a substantial body of research which claims that ‘what gets said is what matters’ (3) – and because ‘what is said’ is increasingly the expressed views of commentators, chat-show hosts and others, rather than the considered statements of researchers or (dread word) experts, special interests appear to have a disproportionate influence in matters relating to science and new technologies.
This lack of leadership from properly informed opinion leads to the development of the precautionary principle. Scientists Soren Holm and John Harris define the precautionary principle as follows: ‘When an activity raises threats of serious or irreversible harm to human health or the environment, precautionary measures that prevent the possibility of harm shall be taken even if the causal link between the activity and the possible harm has not been proven or the causal link is weak and the harm is unlikely to occur.’ (4)
This principle should never be used when evaluating data relating to large populations and low risks. The precautionary principle arbitrarily changes the weight that is given to evidence from different investigations on an uncertain basis – and it represents the antithesis of science.
Professor Sir Colin Berry is a pathologist. He spoke in the closing plenary ‘The future of risk’ at spiked’s Panic Attack conference at the Royal Institution in London on 9 May 2003.
(1) Celeste Condit, The Meanings of the Gene, 2001
(2) Page and Shapiro, 1992
(3) Vatz and Weinberg, 1987; Hogan, 1994
(4) Soren Holm and John Harris, ‘Precautionary principle stifles discovery’, Nature, 29 July 1999