
Peer review and ‘media science’

How do we tell good science from bad? By looking at how it is published.

Ellen Raphael


Since November 2005, the UK media has keenly followed the story of the top stem cell scientist Hwang Woo-Suk, who was exposed for faking his results. Earlier in the year, Hwang had published a paper in the peer-reviewed journal Science reporting success in creating human embryonic stem cells by cloning – a huge scientific advance that was widely applauded. However, data used in the article turned out to be fabricated and Hwang has since been mired in controversy, culminating in his dismissal from Seoul National University this week.

The rise and fall of a renowned scientist working in a cutting-edge field is understandably shocking and the story has made headlines across the world. In the UK, scientists have been invited to discuss the ins and outs of the matter on every major news outlet, and it will be some time before advances in cloning can be discussed without reference to Hwang.

It is not surprising, then, that as part of Social Science Week and National Science Week, the Economic and Social Research Council (ESRC) Genomics Forum organised a panel discussion on 16 March to contemplate the impact fraud has on trust between scientists, policymakers and the public. The panel comprised Ben Goldacre, a doctor and the Guardian’s ‘Bad Science’ columnist; Steve Fuller, professor of sociology at Warwick University; Dr Stephen Minger of King’s College London; and Jeremy Webb, editor of New Scientist.

The discussion was in many ways reassuring. As the panellists talked about serious scientific fraud, it became clear from the few examples they could cite that it is very rare, which is why it hits the front pages when it does happen. It was also apparent that published fraud does not particularly impinge on the public’s view of science. Most cases are forgotten very quickly and don’t make much of a dent in the public consciousness (anyone remember Jan Hendrik Schön?) (1). The most damaging consequences are to scientists working in the area, whose research may be delayed by being sent on a wild goose chase, and to the reputations of scientific journals.

That scientists are preoccupied with fraud is understandable, as they need an accurate historical record of science for work to advance. But it is unfortunate that the first time anyone hears about the scientific method and the peer review process is in discussions like this. It is like learning about the jury system through a high-profile miscarriage of justice.

I spent some time last year discussing how to help people distinguish between dubious claims and good science. I spoke with people who deal with the fallout from dubious science stories, including patient groups, teachers, medical research charities, GPs and pharmacists, and found a huge appetite for guidance to help people sort evidence from opinion and conjecture. The result of these conversations is a short guide to peer review, titled I Don’t Know What to Believe… (2), which lets people in on the inner workings of science publishing.

Peer review, the process of scientists checking one another’s work for validity, significance and originality, prior to publication, is an important arbiter of scientific quality – it is an essential dividing line for judging what is deemed scientific and what is speculation and opinion. Few other professions have an in-built quality checking process before work is made public. Peer review is also a creative process; it is not merely about ticking boxes and rubber-stamping a paper but is about critical engagement with someone else’s work.

As we have seen, peer review will not necessarily detect if someone deliberately sets out to falsify data. There is often no way of knowing this until the paper is published and others in the scientific community have the opportunity to scrutinise the work. However, if Hwang Woo-Suk’s paper hadn’t been peer reviewed, and he had gone directly to the media with his ‘results’, it would have taken far longer than six months for the fraud to be discovered and rectified.

Understanding what peer review is and how it works can also help us weed out claims made by ‘media scientists’, who bypass the peer review system and take their research findings directly to the media or the internet. A recent web search found numerous pseudo-science claims – including claims that lemon juice can be used as a cure for AIDS, that a special diet can cure terminally ill cancer patients, that the use of tinfoil causes Alzheimer’s, and that diabetes can be cured by repairing faulty blood sugar control systems. Type ‘alternative cures’ followed by your disease of choice into a search engine and you will no doubt find many more.

Publicising the status of research cannot prevent people who have already made up their minds from searching the internet for ‘evidence’ to back up their claims. It can, though, make society more self-conscious about what it chooses to believe.

This is not to say that peer review is perfect; it isn’t. Like any large-scale system of judgement it can go wrong. But unpublished research is far worse: scientists can’t repeat or use it, and as a society we can’t base decisions on work that has a high chance of being flawed, sometimes in the most elementary ways.

Scientists say that if they didn’t have peer review, they would have to invent it. Despite this, they do little to promote the process, often concentrating on its failures rather than its successes. At a time when ‘transparency’ is the key buzzword, scientists seem to think that lambasting the system and highlighting its flaws will engender public confidence and trust. Unfortunately, the reverse is often true, and scientists’ reluctance to explain the system does not help people trying to work out whether to believe the latest research claim they read.

When the next case of published scientific fraud hits the headlines, scientists should proceed with caution and make a distinction between a highly technical discussion among scientists and what the public needs to know. We need to be open and honest about what standards to expect from published science and explain that it sometimes goes wrong. When raking over the rubbish, we need to be clear that it is just that – rubbish. In no other sector would we pick over the failures without explaining what should happen and what the peer review system does well.

Ellen Raphael is programme manager at Sense About Science.

(1) Jan Hendrik Schön is a German physicist who worked for Bell Laboratories and precipitated similar fears that science had been dealt an almighty blow. In 2001, he announced in Nature that, using a thin layer of organic dye molecules, he had assembled an electric circuit that, when an electric current was applied, behaved as a transistor. This would have had huge implications and could have led to a move away from silicon-based electronics towards organic electronics. Schön had papers published in the reputable journals Nature and Science, and at the end of an extensive investigation, Science withdrew eight of Schön’s papers and Nature seven.

(2) Download I Don’t Know What to Believe… here [pdf format].


