
Eroding expertise

Scientists should listen to public concerns, not pander to them.

Bill Durodié


In March 2002 I attended the first ‘National Forum for Science’ at the Royal Society in London.

The aim was to promote ‘diversity’ and establish a ‘dialogue’ between scientists and the general public – so that ‘ordinary people’ could feel like they had ‘made a difference’ to the scientific decision-making process.

We were even encouraged to make pledges, which were later mailed to attendees. One attendee pledged to ‘Arrange public meetings’, another to ‘Set up a club’ – while one reassured us that he would ‘Continue my PhD’. How that would help people to better understand scientific issues is anyone’s guess.

Why has the Royal Society, one of the world’s oldest and most prestigious scientific institutions, shifted its focus from the unapologetic motto it adopted in 1660 – ‘Nullius in verba’ (on the word of no one) – to the populist approach of ‘listen to everyone’?

The context for this development is society’s increased awareness of risk. A consciousness of risk now impacts on everyday activities, from school bullying to sunbathing, and from sex to food. In particular, we tend to focus on the alleged risks relating to the application of science and new technologies.

In response to our increasingly negative attitude towards science, many groups are examining how to reconnect scientists to the public. In 1998, a report by the Royal Commission on Environmental Pollution argued that public ‘values’ should be incorporated into the scientific decision-making process.

This idea was expanded on by the House of Lords Science and Society report of 2000, and has been repeated as a mantra in documents published by the Parliamentary Office of Science and Technology and the Economic and Social Research Council. How ‘public values’ are to be ascertained or assessed in relation to scientific evidence remains to be explained.

Scientists have also been encouraged to adopt the ‘precautionary principle’, which suggests that in the absence of definitive proof of safety we should always err on the side of caution. This can have a paralysing effect on scientists’ work, forcing them to emphasise uncertainty at every opportunity. The precautionary principle encourages ‘what if?’ scenarios rather than ‘what is’ assessments, and tends to elevate hypothetical worst cases over hard evidence.

The precautionary principle also allows a new breed of expert to play a prominent role in the policy-making process. So while we’re often told that we can no longer trust scientists, we are asked to trust professional risk communicators, ethicists, relatives of victims, the bereaved and others who have become ‘experts’ in our cautious age.

Consider the inquiry into BSE (bovine spongiform encephalopathy, otherwise known as mad cow disease), led by Lord Phillips. The inquiry gave a prominent role to relatives of vCJD (variant Creutzfeldt-Jakob disease) victims. But while official recognition of these families may have reflected public acknowledgement of their distress, their involvement in wider aspects of the inquiry devalued scientific and clinical expertise.

The truth is that the experience of losing a relative yields no insight into the nature of a disease, or any great wisdom about how to prevent or treat it. Yet victims and relatives play increasingly influential roles in the discussion of science and medicine. So parents of autistic children have recently been advising us on the safety (or otherwise) of the MMR vaccine. And behind the scenes lie a plethora of committees, stakeholder forums and consultative workshops all seeking to advise scientists on how to conduct their work.

Of course scientists and politicians should listen to public concerns and sometimes address them – but they shouldn’t pander to them. In many respects, a dialogue with the public requires a relationship of equals – yet science is made up of expert specialisms, on which very few can comment authoritatively. Simplifying these specialisms to incorporate ‘lay knowledge’ or public ‘values’ is problematic for four reasons:

– it patronises the public by simplifying evidence to make it ‘accessible’;

– it demoralises the actual experts involved by marginalising their work;

– it panders to the conceit of those who purport to know what the public wants;

– it serves to deflect blame from those with a real responsibility to be accountable.

Science requires dispassionate distance and detachment from the objects of its study – meaning it isn’t always suited to ‘accessibility’, ‘inclusion’ and ‘dialogue’. Science may help to inform decision-making in society more broadly, but it is not a democratic process. The Earth goes round the sun, regardless of how it looks to however many people.

Indeed, we owe a debt to those who, in the spirit of exploration and experimentation, have been prepared to put their heads above the parapet and to fly in the face of popular perception – to challenge widely held views with newly discovered facts and evidence.

Today there appears to be confusion between two very different problems: the demise of public participation, and hence of political legitimacy; and public disenchantment with, and confusion about, science. It would appear that nervous politicians and officials are attempting to reinvigorate the former by establishing a dialogue around the latter.

But science is unsuited to such deliberative processes. Today’s approach tends to flag up the effect of science upon society with little consideration for the impact of society upon science. A society that has lost faith in ambition and imagination will be bad for science. A healthy debate about science would emerge from healthy political debate more broadly, not vice versa.

Public dialogue in scientific decision-making looks like the conceit of a minority staking out its claim to form a new, moral elite. It postures as radical and democratic, but since the members of ethics committees and special agencies are appointed by the government, it actually invites greater political interference in scientific matters.

This is bad for both science and society.

Bill Durodié is director of the International Centre for Security Analysis at King’s College London. He is the author of Poisonous Dummies: European Risk Regulation after BSE (European Science and Environment Forum, 1999), and a contributor to Science: Can You Trust the Experts? (Hodder Murray, 2002) and Rethinking Risk and the Precautionary Principle (Butterworth-Heinemann, 2000).

This article is a transcript of a talk given for the Institute of Ideas at the Edinburgh Book Festival.


