spiked statistics

A statistician's guide to the use and abuse of stats.

Toby Andrew

When dealing with statistics, you should bear three principles in mind:

1) You cannot prove anything with statistics.

Statistics should be an honourable science – but instead it has long been associated with Mark Twain’s comment, ‘lies, damned lies and statistics’. Statistics, it is asserted, can be used to prove anything.

But while figures can be presented in a variety of ways – and it is true that the most favourable view is often selected – in fact you cannot prove anything with statistics. All you can do is rule out the improbable.
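
A minimal sketch in Python, with invented coin-toss figures, shows what 'ruling out the improbable' means in practice: a significance test does not prove that a coin is biased, it only shows how unlikely the observed result would be if the coin were fair:

```python
# Sketch of 'ruling out the improbable': a significance test cannot
# prove a coin is biased; it can only show that the results would be
# very unlikely if the coin were fair. The figures are invented.
from math import comb

n, heads = 100, 70  # hypothetical experiment: 70 heads in 100 tosses

# Probability of seeing 70 or more heads if the coin were actually fair
p_value = sum(comb(n, k) for k in range(heads, n + 1)) / 2**n
print(f"P(>= {heads} heads | fair coin) = {p_value:.6f}")  # ~0.00004

# A tiny p-value rules the fair coin out as improbable - it does not
# prove any particular alternative explanation.
```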

The word ‘statistics’ originated as a summary term for tables of numbers describing quantitative features of ‘the state’. Today, the word refers to a hybrid branch of mathematics that deals with inherent variation and probability, involving logic and judgement as much as maths. Where causality is indistinct or multifaceted, such as with biology and (even more so) with social sciences, statistics can be a powerful descriptive and analytical tool. But helping describe and analyse a phenomenon is not the same as proving it, and this distinction must be kept in mind.

2) Correlation does not mean causation.

That correlation does not mean causation is a textbook platitude, but true. Many correlated phenomena are related in no meaningful way at all. Over a period of years, a person’s age might be perfectly correlated with the price of petrol, but nobody would say that one is the cause of the other.
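
As a minimal sketch of how easily such a spurious correlation arises, the following Python snippet uses invented figures for age and petrol prices; both series simply rise over time, so they correlate perfectly:

```python
# A deliberately spurious correlation: a person's age and the price
# of petrol both rise over time, so they correlate almost perfectly.
# The figures below are invented purely for illustration.
from statistics import correlation  # requires Python 3.10+

years = range(2000, 2010)
age = [30 + (y - 2000) for y in years]               # ages 30..39
petrol = [0.80 + 0.05 * (y - 2000) for y in years]   # price per litre, rising steadily

r = correlation(age, petrol)
print(f"Pearson correlation: {r:.3f}")  # prints 1.000 - yet neither causes the other
```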

Yet in many cases, correlation and causation are collapsed together. Consider the following statement on income inequality and health from the British Medical Journal: ‘Income inequality has spill-over effects on society at large, including increased rates of crime and violence, impeded productivity and economic growth, and the impaired functioning of representative democracy.’ (1) While there is epidemiological evidence that health inequality is associated with income, there is no evidence whatsoever to claim that income inequality causes crime, economic decline or dysfunctional democracy.

3) You cannot prove a negative.

A common demand made today is that a new technology or a proposed research project should be proved risk-free before it is allowed to take place. This is not just unreasonable – it is impossible. Outside of deductive logic, it is not possible to prove a theory – whether negative (for example, ‘risk-free’) or positive (for example, evolution).

Theories can be falsified, but not proved beyond all doubt – just as it is not possible to prove that God does not exist. Evidence can only be diligently accumulated that either supports or undermines a particular proposition or theory.

(1) ‘Socioeconomic determinants of health’, British Medical Journal, 5 April 1997

How to assess a study critically:

1) Have the results been published and peer reviewed?

News headlines often report research ‘findings’ that have not been published. In October 2000, for example, Decode Genetics of Iceland and Roche Genetics of Switzerland claimed to have found a gene involved in a mechanism that contributes to schizophrenia (1), without having first published the research. Such publicity stunts are particularly unwelcome among geneticists (and unwise, given the history of many false findings linked to schizophrenia). And even when results are published, they still need to be replicated before they can be considered reliable – particularly with genetic linkage analyses.

2) Refer to the source publication.

This is the only guaranteed way to assess the quality of the research and whether it has been accurately reported. But even when the reporting is accurate, there is a tendency, in order to save space, to emphasise the results – often presented as facts – with little or no regard for how they were obtained. Statistics is often viewed as little more than the analysis of data and the presentation of results. While this is reasonable for the layman, it is inexcusable for researchers: the thought and clarity that go into the design of a study are far more important than apt statistical analysis and clear presentation of results at the end.

3) Check the study design.

Why and how were the data collected? What were the original motivations and research questions? What was the experimental or observational design of the study? Does the design contribute to an answer unambiguously or does it raise problems of its own? If the study is a survey, were the questions or the social context likely to elicit some responses more than others?

Take a Wellcome Trust survey published in February 2001 on the role of scientists in public debate. It found that ‘Nine in 10 scientists believe that the public needs to know about the social and ethical implications of scientific research’ (2). Wellcome Trust director Mike Dexter concluded that ‘the next generation of scientists will need to be able, as well as willing, communicators’. But as the report itself points out, the survey followed closely on from controversies over GM foods, research using embryos, and accusations of science being aloof from the public. What scientist in their right mind would admit to such a charge by replying ‘no’ to the MORI pollster?

4) Is the sample representative of the population of interest?

Sample survey statistics is a field in its own right. Its rationale is that one can obtain an accurate picture of a population without having to interview every last person – if, and only if, the sample is taken at random.

What matters here is the sample size, and whether the sample is representative of the population of interest. It would be misleading to claim, as some have done, to have found the prevalence of HIV among the general London population when the sample was drawn from genitourinary medicine clinics (3).
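
The effect of such sampling bias can be sketched in a few lines of Python. The prevalence figures below are invented purely for illustration – a population where the condition is rare overall but common among clinic attendees:

```python
# Sketch of sampling bias, with invented numbers: estimating prevalence
# from a clinic-based sample versus a simple random sample.
import random

random.seed(1)

POP = 1_000_000
TRUE_PREV = 0.001     # assumed population prevalence: 0.1%
CLINIC_PREV = 0.10    # assumed prevalence among clinic attendees: 10%

population = [random.random() < TRUE_PREV for _ in range(POP)]
clinic = [random.random() < CLINIC_PREV for _ in range(5_000)]

def prevalence(sample):
    return sum(sample) / len(sample)

random_sample = random.sample(population, 5_000)
print(f"random sample estimate: {prevalence(random_sample):.4f}")  # ~0.001
print(f"clinic sample estimate: {prevalence(clinic):.4f}")         # ~0.10
```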

5) Data analysis and presentation of results.

If the design has been adequately addressed, have the appropriate analyses been correctly conducted and the results presented clearly in a manner that is not misleading? One way exaggerated results can be obtained is to emphasise relative figures at the expense of absolute figures – in effect, citing figures out of context.

The relationship between health and social inequality is often exaggerated in this way. The relative infant mortality for manual workers in Britain is about twice that of the professional classes – an inequality that has remained constant for most of the past century. But looking at the ratio alone omits one crucial fact: infant mortality fell at least fivefold for all classes during the twentieth century (4).
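
A quick sketch, with illustrative rates rather than the historical figures, shows how a constant relative inequality can coexist with a dramatic absolute improvement:

```python
# Illustrative (not historical) infant mortality rates per 1,000 live births.
# The relative inequality stays constant at 2x while absolute rates fall fivefold.
rates = {
    "start of century": {"professional": 50.0, "manual": 100.0},
    "end of century":   {"professional": 10.0, "manual": 20.0},
}

for era, r in rates.items():
    ratio = r["manual"] / r["professional"]
    print(f"{era}: manual {r['manual']}/1,000 vs professional "
          f"{r['professional']}/1,000 (relative risk {ratio:.1f}x)")
# Both classes end the century far better off - a fact the ratio alone conceals.
```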

6) Is there a different interpretation to the one being presented for the results?

And do the results bear out the ensuing discussion? In the past a discrepancy between results and conclusions was as likely to arise from honest misinterpretation of the data as from unscrupulous interpretation. By contrast, public policy debate today is increasingly fact-free, actively justified by the argument that public perception or feelings are as important as reality.

This can most clearly be seen in ongoing discussions about crime, fears that scientific research is ‘running ahead’ of public opinion, or the panic about the NHS body parts scandal – all issues covered on spiked.

(1) ‘Gene find offers hope to mentally ill’, Guardian, 21 October 2000

(2) ‘The role of scientists in public debate’, MORI website

(3) ‘HIV infection concentrated in London’, British Medical Journal, January 1995. For critical comments on this study, see the letters in response

(4) ‘You poor things’, Toby Andrew, LM magazine, October 1997
