Creative accounting for facts

Statistics make for scary headlines – but a few simple errors often lie behind them.

Nell Barrie

Statistics seem to be everywhere these days. From high street sales to the latest scare stories, numbers fill the news. They make for quick, snappy headlines and also smack of scientific rigour – a number is not just someone’s opinion, it’s a fact. Or is it?

‘Estimates’

One story reporting the selective abortion of female foetuses in India is a case in point. The headline read ‘10 million girl foetuses aborted in India’. Sounds pretty shocking, doesn’t it? Well, the story was shocking, but not quite so much as the headline might lead you to believe (1).

The number actually came from one of the researchers who had published the paper. Professor Prabhat Jha was quoted as saying: ‘If this practice [selective abortion following an ultrasound to determine sex] has been common for most of the past two decades since access to ultrasound became widespread, then a figure of 10m missing female births would not be unreasonable.’ So actually the figure of 10 million was a ‘not unreasonable’ estimate, hardly the stuff of scientific fact.

In September 2005, a number of papers and websites reported that bird flu could kill up to 150 million people worldwide (2). This makes for a pretty scary headline, but scratch the surface and the truth emerges. A UN health official had said that a flu pandemic could kill ‘between 5m and 150m’. The official, Dr David Nabarro, later said: ‘My reason for giving the higher figure is simply that I want to be sure that when this next flu pandemic does come along, that we are prepared for the worst as well as for the mildest.’

Links everywhere

Another common type of statistics story is the purported ‘link’. Every day we’re told of possible links between, say, violent games and aggressive behaviour, or between fish oil and concentration levels.

But what does it actually mean to say, for example, that violent computer games are ‘linked’ with aggressive behaviour? It’s not enough just to take a random section of the population, measure how many of them play violent games and see if those who do are also more aggressive, because you don’t know what other factors could be at work. A correlation between violent games and aggression does not mean that the games are causing the aggressive behaviour; perhaps more aggressive people simply gravitate towards violent games.
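
To see why, consider a toy simulation – a sketch in Python, with an invented model and invented numbers – in which a hidden ‘aggressive temperament’ drives both gaming and behaviour, so the two correlate even though neither causes the other:

```python
import random

random.seed(1)

# Hypothetical model: an underlying "aggressive temperament" drives
# BOTH how much someone plays violent games AND how aggressively
# they behave. The games have no causal effect at all in this model.
people = []
for _ in range(10_000):
    temperament = random.gauss(0, 1)               # hidden confounder
    gaming = temperament + random.gauss(0, 1)      # play driven by temperament
    aggression = temperament + random.gauss(0, 1)  # behaviour likewise
    people.append((gaming, aggression))

# Pearson correlation, computed by hand to stay dependency-free.
n = len(people)
mean_g = sum(g for g, _ in people) / n
mean_a = sum(a for _, a in people) / n
cov = sum((g - mean_g) * (a - mean_a) for g, a in people) / n
sd_g = (sum((g - mean_g) ** 2 for g, _ in people) / n) ** 0.5
sd_a = (sum((a - mean_a) ** 2 for _, a in people) / n) ** 0.5
print(f"correlation = {cov / (sd_g * sd_a):.2f}")  # about 0.5, with zero causation
```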

You might think that including more factors in your experiment would help. In the case of violent games and aggression, you could also measure things like aggression levels early in life, before any violent games were played, and aggression levels later on. The trouble is that the more factors you include in a statistical analysis, the higher the chance that one of them will appear significant purely by accident – the more you measure, the more likely you are to produce a false link.

Significance

Statistics are all about ‘significance’. A result is significant if you can be reasonably confident that it is unlikely to have happened by chance. When scientists found that smokers were far more likely to develop lung cancer, the result was significant: it wasn’t just a coincidence – something was linking the cigarettes and the disease.

The level of statistical significance in a study is usually set (somewhat arbitrarily) at five per cent. A result is ‘significant’ if the probability of getting that result by chance is five per cent or less. So, for example, if you were looking at the effect of a fertiliser on plant growth, you would need two groups of plants: one which had fertiliser applied to it, and one which didn’t. You could then compare the average growth of the two groups after a period of time. If you find the fertilised group has grown ‘significantly’ better, this means that the chances of that amount of extra growth occurring by chance are five per cent or less.
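
As an illustration of how such a comparison might be run in practice – a minimal sketch with made-up growth figures, assuming SciPy’s standard two-sample t-test – consider:

```python
from scipy import stats

# Hypothetical growth measurements (cm) after four weeks.
fertilised   = [12.1, 13.4, 11.8, 14.0, 12.9, 13.7, 12.5, 13.1]
unfertilised = [10.9, 11.5, 12.0, 10.4, 11.8, 11.1, 10.7, 11.3]

# Two-sample t-test: is the difference in mean growth larger than
# chance variation alone would plausibly produce?
t_stat, p_value = stats.ttest_ind(fertilised, unfertilised)

print(f"p = {p_value:.4f}")
# p <= 0.05 is the conventional cut-off for 'significance'.
if p_value <= 0.05:
    print("Significant at the five per cent level.")
else:
    print("Not significant at the five per cent level.")
```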

But the problem comes when you’re looking at more than one factor. Maybe better soil or extra water improved the growth of those plants, and you could measure these variables too. But a five per cent (one in 20) probability of a significant result occurring by chance means that for every 20 variables you measure that have no real effect, you should expect about one to appear significant when actually it isn’t.
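
This inflation is easy to demonstrate by simulation. In the sketch below every ‘variable’ is pure noise – both groups are drawn from the same distribution each time – yet roughly one in 20 comparisons still clears the five per cent threshold (illustrative code; the exact count varies with the random seed):

```python
import random

from scipy import stats

random.seed(42)

# 1000 variables, none of which has any real effect: both groups
# come from the same distribution on every trial.
trials = 1000
false_positives = 0
for _ in range(trials):
    group_a = [random.gauss(0, 1) for _ in range(30)]
    group_b = [random.gauss(0, 1) for _ in range(30)]
    _, p = stats.ttest_ind(group_a, group_b)
    if p <= 0.05:
        false_positives += 1

# Expect roughly 5% 'significant' results from chance alone.
print(f"{false_positives} of {trials} null variables looked significant")
```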

This kind of false significance (called a type I error) is especially problematic when investigating mysterious conditions such as cot death. It’s tempting to measure every possible factor (position of the baby, whether the mother smoked or breastfed, whether the baby was sleeping in the parents’ bed, type of bedding) and feed all those measurements into your statistical analysis, but this will increase the chances of making exactly this kind of mistake.

Even if one study suggests a link, this link must be replicated by other scientists in more experiments before it can be accepted. So most ‘link’ stories of this kind don’t really tell you much at all, except that further work is needed to confirm or disprove whatever is being reported.

A story on homeopathy is another example of poor statistical practice. The BBC reported that a study had shown the benefits of homeopathy to NHS patients (3). Seventy per cent of patients taking homeopathic remedies had reported positive health effects. It turned out that the people receiving homeopathic treatment had simply been asked whether or not they felt better for it. This ignores a major principle of trials like these – eliminating bias.

In a proper drug trial, some patients are given a placebo – for example, a sugar pill – while others are given the real drug. Nobody knows which group they are in, often not even the doctors handing out the pills. Because no patient knows whether they have been given the real drug, they are far less likely to convince themselves that they feel better. The homeopathy study had no such safeguard, so the patients’ desire to feel well could have had a major effect on the outcome of the experiment.
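
A toy simulation shows why the control group matters. Assume – purely for the sake of argument – that 70 per cent of people asked ‘do you feel better?’ say yes whatever they were given (placebo response, regression to the mean, simple politeness); the remedy itself does nothing in this model:

```python
import random

random.seed(7)

# Hypothetical: 70% report feeling better regardless of treatment.
FEEL_BETTER_RATE = 0.70

def report_improvement(n):
    """Number out of n patients who say they feel better."""
    return sum(random.random() < FEEL_BETTER_RATE for _ in range(n))

# Uncontrolled survey, like the homeopathy study: looks impressive.
treated = report_improvement(500)
print(f"Remedy arm:  {treated / 500:.0%} report feeling better")

# Blinded, placebo-controlled comparison of the same 'remedy':
placebo = report_improvement(500)
print(f"Placebo arm: {placebo / 500:.0%} report feeling better")
# The headline-friendly 70% appears in BOTH arms; only the
# difference between arms would be evidence of a real effect.
```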

Telling half the story

The press is often guilty of misrepresenting statistics by telling only half the story, leaving out key facts that would clarify the issue for readers. In 1996 the World Wildlife Fund (WWF) reported that the rate of loss of Amazonian rainforest had increased by 34 per cent since 1992. But the fuller picture was rather different: 1994-5 was a peak year for deforestation, with 0.81 per cent of the Amazon cleared, and in 1998-9 deforestation fell to 0.47 per cent – almost half the previous year’s level.

Another way to tell only half the story is to quote relative rather than absolute risk. Stories appear claiming ‘smoking doubles risk of cot death’, but these are often not as worrying as they sound. Absolute risk is the total risk, over a period of time (often the average lifespan), of a given event occurring (such as getting cancer); relative risk compares the risk in two different groups. The headlines often mask the fact that the background (absolute) risk is pretty low. Cot death, for example, affects about one in 2000 births in Europe, so a doubling of that risk still only means a baby has a one in 1000 chance of dying – 0.1 per cent (4).
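
The arithmetic is simple enough to check directly, reusing the figures quoted above:

```python
# Relative vs absolute risk, using the cot death figures in the text:
# a background risk of 1 in 2000, 'doubled' by smoking.
background_risk = 1 / 2000      # absolute risk: 0.05%
relative_risk = 2.0             # 'smoking doubles the risk'
doubled_risk = background_risk * relative_risk

print(f"Background risk: {background_risk:.2%}")  # 0.05%
print(f"Doubled risk:    {doubled_risk:.2%}")     # 0.10%
# A '100% increase in risk' headline still describes a 1-in-1000 chance.
```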

Luckily it’s usually not difficult to work out what’s going on behind the headlines. Sometimes just reading the whole article is enough to give you the real story, and if it’s still not clear where the truth lies, that probably means that the jury’s still out.

Nell Barrie has worked as an intern at spiked.

Read on:

spiked statistics, by Toby Andrew

An epidemic of epidemiology, by Rob Lyons

(1) 10 million girl foetuses aborted in India, Guardian, 9 January 2006

(2) Bird flu ‘could kill 150m people’, BBC News, 30 September 2005

(3) New study is boost to homeopathy, BBC News, 21 November 2005

(4) Pregnancy clue to cot death risk, 2 September 2004
