
A genetically modified survey

The UK government consultation on GM food took an unscientific approach to gauging public opinion.

Scott Campbell

Topics Science & Tech

The government’s public consultation exercise on GM food, ‘GM Nation?’, was broadly accepted as an accurate reflection of the public’s attitudes (a letter to Nature published by my colleague Dr Ellen Townsend and me provided one of the few critical analyses of the report).

In policy circles there has been talk of performing similar exercises in future in order to increase public participation in democracy, at a time when cynicism about politics is supposedly at an all-time high. In fact, though, the GM debate was a travesty, and serves as a model of how not to use social science in the interests of democracy.

The ‘GM Nation?’ report concluded that the general public is overwhelmingly against GM technology, with feelings ranging from ‘suspicion and scepticism, to hostility and rejection’; there are, it was said, ‘many more people who are cautious, suspicious or outrightly hostile about GM crops than there are supportive towards them’. These conclusions were based on quantitative questionnaires answered by 36,500 people, as well as on additional written comments received. (About half of the responses came by mail, and half via the ‘GM Nation?’ website.) Such a large sample certainly looks impressive, considering that a lot of social science and market research draws conclusions on the basis of samples of only a few hundred people.

But the large size of the sample does not overcome one glaring problem with it. It is, as even its authors concede, a self-selected sample, and therefore is almost certainly not random. As a self-selected sample, it is probably comprised mostly of those with strong opinions on the subject. After all, if you don’t give a damn, why would you go to the trouble of writing a letter to a survey unit telling them that you don’t give a damn? The fact that tens of thousands of the sort of people who get worked up about GM wrote in to say that they get worked up about it tells us nothing much about the rest of the population, especially when one considers that none of the ‘GM Nation?’ budget was spent on advertising, and so most of the people who knew about it (before the results hit the headlines) were the activists.

After all, 36,500 people amounts to roughly one out of every 2,000 people in Britain, and you’d hardly have to ask 2,000 people before you got someone who was strongly against GM. Environmental groups such as Greenpeace and Friends of the Earth mounted concerted campaigns to get their members to take part in ‘GM Nation?’ (and newspapers reported complaints that the public meetings held as part of the process were overwhelmed by anti-GM activists).

Consider that over a million people in Britain took to the streets against the Iraq war, but proper surveys showed us that there was not an overwhelming majority of people against the war. A survey about war attitudes that only asked people on these marches wouldn’t be taken seriously – but the ‘GM Nation?’ survey amounted to little more than that. So we have no right to take these results to represent the general population. No decent scientific journal would take these results seriously, and there is no reason why anyone else should either.

Yet not all the blame can be laid at the feet of the activists, because it was the very nature of the government’s debate process that encouraged them to act as they did. Any debate about an issue that provokes strong feelings in a minority, while the majority is less interested, is bound to attract the former group, but not the latter. For that reason we cannot take such debates as a good indicator of the view of the rest of the population, any more than gauging attitudes among the audience at a meeting on ‘The fascist effects of Western capitalism’ gives you a picture of what the wider population thinks about Western capitalism.

The authors of the report tried to remedy these limitations by picking out a random sample of participants to see if there were any standardised responses in the comments that were being sent in – which there weren’t. But this tells us little. People who are against GM are perfectly capable of expressing their own opinions. Hence, we cannot assume the sample is representative on the basis of this check.

The report’s authors had acknowledged that the views of those who made the effort to take part in ‘GM Nation?’ ‘might not be representative of the general population’. So a ‘narrow-but-deep’ study was commissioned from another company. This consisted of asking 78 randomly chosen people 13 of the same questions that had been asked of the larger group (the latter they labelled the ‘open debate’ group). So this narrow-but-deep group functioned as a ‘control group’ (or, more accurately, a ‘measure of reliability’) on the open debate group, to see if there was a ‘silent majority with different views’.

According to the report, apart from some minor differences, the control group results backed up the results from the open debate group – the general public, the authors said, is not ‘a completely different audience with different values and attitudes from an unrepresentative activist minority’. Was this true? Well, no journalist was likely to find out, as no table had been provided to present the differences, and the actual results of the two groups were buried deep within the hundreds of pages of supporting documents, far apart from each other (with some of the data actually missing). Suspecting that some inconvenient data had been deliberately hidden, I gathered the relevant material together myself. Once these results are compared side by side, stunning differences emerge. These can be seen in Table 1.

[Table 1: open debate group and narrow-but-deep group results compared side by side]

Some of the questions here reveal the low quality of the survey. For example, question 2 – ‘I am concerned about the potential negative impact of GM crops on the environment’ – is exactly the sort of question that even a high school social studies student could tell you should not be used. It’s vague, and practically begs to be answered in the affirmative. But some questions are more straightforward, and the differences between the groups on these questions are huge.

For instance, on question 5 – ‘I would be happy to eat GM food’ – 86 per cent of the open debate group disagreed, but this went all the way down to only 35 per cent in the random group. Hardly anyone in the open debate group – only eight per cent – agreed, but this increased to over a third of the random group, 36 per cent. On whether GM crops would result in less pesticide, the 71 per cent disagreement in the open debate group went down to 12 per cent in the random group, while agreement went up from 14 per cent to 54 per cent. Seventy-nine per cent of the open debate group thought that GM wouldn’t help British farmers compete, but this collapsed to only 23 per cent in the random group. Meanwhile, the people who thought GM would help them compete went up from nine per cent to 40 per cent.

Would GM provide cheaper food? Seventy per cent said no in the open debate group, but only 14 per cent said no in the random group – whereas people who said yes increased from 14 per cent to 43 per cent. ‘Does GM interfere with nature in an unacceptable way?’ – the 84 per cent yes vote collapsed to 37 per cent. ‘Could it benefit people in developing countries?’ – 75 per cent against became only 18 per cent against, whereas the percentage in favour went from 13 per cent in the open debate group to 50 per cent in the random group.
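The scale of the gulf between the two groups can be checked with a standard two-proportion z-test. A minimal sketch, using the published figures for question 5 (86 per cent versus 35 per cent disagreeing) and group sizes of roughly 36,500 and 78 – the exact counts are an illustrative assumption, since the report gives percentages:

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test: returns the z statistic and a
    two-sided p-value under the normal approximation."""
    x1, x2 = p1 * n1, p2 * n2                 # implied counts
    p_pool = (x1 + x2) / (n1 + n2)            # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Question 5 ('I would be happy to eat GM food'):
# 86% of the open debate group (n ~ 36,500) disagreed,
# versus 35% of the narrow-but-deep group (n = 78).
z, p = two_proportion_z(0.86, 36500, 0.35, 78)
print(f"z = {z:.1f}, p = {p:.2g}")
```

Even with the random group numbering only 78, the resulting z statistic is around 13 – far beyond any conventional significance threshold, so a difference this size could not plausibly be sampling noise.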

So on over half the questions – specifically, the less vague and leading questions – massive differences like these resulted. One doesn’t need a PhD to see that these results completely discredit the results of the open debate. The randomly selected control group did its job, meaning that the results of the larger survey should have been discarded. They cannot be said to be representative of what the public in the UK thinks about GM food. But nowhere is this admitted in the report; in fact, the opposite is claimed – it is said that the control group’s responses mostly bear out the main results.

So we have a report based on a method that no decent empirical researcher would consider adequate. The survey’s own control group then blows these results out of the water completely. The report should have been thrown in the dustbin, yet it gets released to the general public as holy gospel. Tactics like this do get discussed in textbooks on scientific method, but only in the chapter on ethics.

Despite our letter in Nature, and the widespread talk about our letter that we heard was going on in government and industry circles, none of the people involved with the report have provided any sort of serious response. Nature published a reply from a member of the ‘GM Nation?’ steering board, Robin Grove-White, professor at the Institute for Environment, Philosophy and Public Policy at Lancaster University. But his response merely mouthed platitudes like, ‘No one would claim that the GM debate was a flawless exercise, though, like others involved, I regard it as time fruitfully spent. It will be and should be evaluated rigorously, not least for lessons that can be learned for the benefit of similar exercises in the future’. This is just civil service-style waffle. No attempt was made to address the serious points we had made.

In fact, the usual response at conferences and talks held in the aftermath of ‘GM Nation?’ was that the results were supposed to be ‘qualitative, not quantitative’. We heard via Nature editors that some members of the steering board were taking this line as well (although other members apparently agreed with our comments).

This is a classic social science fudge. The survey was set up to record masses of quantitative data, as well as qualitative data in the form of written comments. This quantitative data was then presented as just that, in the form of tables and graphs, using precise numbers (and some less precise quantities as well, in sentences such as ‘Most people are worried about GM’). One cannot then turn around and say that it is unfair to criticise the survey on quantitative grounds because it isn’t supposed to be that sort of thing. Whatever the original intentions were, quantitative data was what was collected, analysed, reported, and commented on by the media. Take away the quantitative results, and you have very little of significance – merely a record of some views on GM which were well known already.

So how should governments work out what public attitudes are? The best way is to use the tried-and-tested technique of random sampling. One doesn’t need 36,500 people to determine attitudes if the sample is random. However, problems arise even here. One might, for example, send out questionnaires to randomly chosen members of the public, and many reputable academic studies on attitudes to GM have done this. However, in most cases the response rates are very low – response rates as low as 25 per cent have been reported. Most of those who do respond are probably going to be those with a beef against GM. So even if one starts with a random sample, the sample can end up greatly biased by the limited response.
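The point that a modest random sample beats a huge self-selected one can be made concrete. For a simple random sample, the margin of error depends on the sample size, not the population size. A sketch, assuming simple random sampling and the worst case of an evenly split opinion:

```python
from math import sqrt

def margin_of_error_95(n, p=0.5):
    """95% margin of error for an estimated proportion p
    from a simple random sample of size n."""
    return 1.96 * sqrt(p * (1 - p) / n)

for n in (100, 1000, 36500):
    print(f"n = {n:>6}: +/- {margin_of_error_95(n):.1%}")
```

A random sample of about 1,000 already pins a proportion down to within roughly three percentage points; 36,500 self-selected responses, by contrast, offer no such guarantee at any size, because the error is bias, not sampling variance.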

What is needed is what I call a ‘topic blind’ recruitment strategy, where random people agree to provide their views on what they are told is a general current issue – before they know what the actual issue is. That way, much of the self-selection is prevented. Dr Townsend and I have done a careful topic-blind study of 100 people, which will be appearing in the journal Risk Analysis. This presents a different picture from that presented by ‘GM Nation?’. In fact, its results are more like the results of the narrow-but-deep sample. About 50 per cent of people intend to buy GM food, and 50 per cent do not. Even among the latter group, though, attitudes are not that set against GM food: 87 per cent of that group were happy to taste what they thought was GM food.

It is also desirable that questions about GM food be embedded among questions about other current concerns – a strategy that has been used in risk perception research for decades now. The use of such a strategy means that participants will be unaware that GM food is the focus of the research; consequently, their responses are more likely to be reliable and realistic. Such a study has been carried out by Dr Townsend, which will also be published this year in Risk Analysis. The results show that worries about GM rank very low compared to other worries.

So public participation exercises conducted along the lines of ‘GM Nation?’ give us inaccurate pictures of public opinion on controversial issues. Despite this, groups that advocate more public participation are keen to extend the ‘GM Nation?’ approach to other issues. But because such public debates inevitably attract a skewed segment of the population, they cannot be used as a gauge on public opinion.

Moreover, using such exercises to decide issues of public policy is, in effect, to set up a ‘meddler’s charter’. Those who have strong views on the matter in question (as well as the necessary energy and affluence) will attend, while most members of the general public will not. This will inevitably result in the former group imposing their views on the rest of society, whatever the actual merits of these views.

A genuinely free and democratic society should simply allow people to make up their own minds as much as is practical. In the specific case of GM food, for example, as long as the relevant experts are satisfied that it is safe – and hundreds of studies have now been completed which show that it is (as well as the fact that GM food has been eaten in the USA for years now, with no discernible effects) – people should be left to decide for themselves whether or not to purchase it.

This is more democratic and liberal than setting up a spurious public debate that will inevitably be hijacked by activists, whose views are widely known and publicised anyway, on the pretext that this involves people in public policy.

Dr Scott Campbell is a lecturer in the philosophy department, and in the Institute for the Study of Genetics, Biorisks and Society (IGBiS), at the University of Nottingham. He has written for publications including The New Criterion, Partisan Review, The Skeptic, Philosophy, Philosophical Studies, and American Philosophical Quarterly. He created and ran the Skeptics in the Pub night in London, and runs his own opinion page, Blithering Bunny.


