
Scaring into space

A new book by Martin Rees, the Astronomer Royal, gives humanity a 50/50 chance of survival.

Helene Guldberg


In his new book, Martin Rees, Britain’s Astronomer Royal, offers to bet anybody $1000 that a bioterrorism incident will kill at least a million people before the year 2020. I happily took him up on that bet on Radio 4’s Start the Week the other Monday.

If we are both still alive, I am confident that I will – and Rees says he hopes that I do – pocket his cash.

But not only does he believe we are likely to witness the tragic effects of bioterrorism within our lifetime: Rees also believes that the future of all life on Earth is under threat. In his book Our Final Century: will the human race survive the twenty-first century?, published earlier this month, Rees concludes that our civilisation only stands a 50/50 chance of survival.

Yet despite giving us such lousy odds, Rees claims that he does not want to be seen as a doom-monger, and seems rather uneasy about the way that his book has been marketed. The US edition of his book is, he says, ‘melodramatic’. He cannot have his cake and eat it. The American title Our Final Hour: A Scientist’s Warning: how terror, error, and environmental disaster threaten humankind’s future in this century – on Earth and beyond may be something of a mouthful, but as a description of the book, it is rather accurate.

Rees, after all, is warning that we only have a 50 percent chance of escaping ‘an eternity filled with nothing but base matter’ (p8). He thinks that civilisation itself is threatened by twenty-first century technology. ‘Populations could be wiped out by lethal “engineered” airborne viruses…. We may even one day be threatened by rogue nanomachines that replicate catastrophically, or by superintelligent computers’, he speculates on the very first page of the book.

Should we take these warnings seriously? The reviews of Our Final Century – which invariably describe its message as sobering – imply that we should. The book’s jacket instructs us that, when a leading scientist predicts that this could be our final century, ‘we could do well to take notice’. I beg to differ.

Martin Rees has quite rightly been described as ‘arguably the finest all-round theoretical physicist working today’ (1). His international eminence in the field of cosmology is without doubt. But his many warnings about hypothetical and, he admits, ‘improbable’, risks should not cause us too many sleepless nights. The threats are based on science fiction rather than fact.

According to Rees, ‘bio-, cyber- and nanotechnology all offer exhilarating prospects. But there is a dark side: new science can have unintended consequences.’ (p vii) Of course, all science can lead to outcomes that have not been predicted. That is the nature of experimentation. But why should we now start organising society on the basis of avoiding the ‘unintended consequences’ of scientific and technological advance?

Rees’ warnings are not based on the current state of science, but on a multitude of scary ‘what if?’ scenarios. What if, for example, a lone lunatic or a terrorist organisation were able to genetically engineer the Ebola virus so as not to kill its victims too quickly, enabling those infected by the virus to pass it on to more people before meeting their own deaths?

Nobody can guarantee that this could never happen. But that is precisely the problem with ‘what if?’ scenarios. Nobody can guarantee that, in the future, someone somewhere might not build a spacecraft with the power to alter the orbit of the moon and send it crashing into Earth – but so far as I know, there is no danger of that happening in the near future.

The same goes for bioterrorism. Leading virologists assure us that bioterrorism is, now and for the foreseeable future, one of the least effective ways of killing large numbers of people. It took the Aum Shinrikyo sect in Japan 10 years and £10 million of research to try to develop an effective biological weapon – including anthrax, Q fever and botulinum toxin. The sect failed. In the end, it released the nerve gas sarin on the Tokyo subway – one of the most densely populated places on Earth – killing 12 people.

Or take Rees’ threat that human beings may at some point in the future be deemed redundant by superintelligent computers. Right now we do not even fully understand how a single brain cell works – never mind how to design a conscious, intelligent, thinking machine capable of outmanoeuvring the human race.

Proponents of today’s risk-averse culture present us with a false choice. In Rees’ view, we can carry on as usual: allowing for technological and scientific developments that bring incremental social benefits, but which create the conditions in which calamitous events can destroy our entire planet. Or we can pause: forsake some scientific and technological advance, and by doing so avert disaster.

Rees does acknowledge that we could miss out on the benefits of science and technology, asking: ‘How will we balance the multifarious prospective benefits from genetics, robotics, or nanotechnology against the risk (albeit smaller) of triggering disaster?’ But he does not have an answer. And his constant warnings about possible disaster only serve to heighten our fears about the consequences of change.

The bottom line is that we cannot live our lives, nor curtail innovation, on the basis of such hypothetical future risks. And we could pay a very heavy price for taking on board this precautionary outlook, missing out on unimaginable social benefits.

People living 100 years ago could not have imagined our lives today – with the technological, scientific and medical advances we have witnessed. The advances have had tangible human consequences: at the turn of the twentieth century, life expectancy was less than half of what it is today. Out of every 1000 babies born, 150 died before they reached their first birthday. Today the infant mortality rate has dropped to fewer than five in every 1000.

Rees is not entirely averse to risk-taking. When it comes to space exploration, he argues, we need people who are prepared to accept high risks in pursuit of new frontiers. Why? Because ‘even a few pioneering groups, living independently of Earth, would offer a safeguard against the worst possible disaster – the foreclosing of intelligent life’s future through the extinction of all humankind. Humankind will remain vulnerable so long as it stays confined here on Earth. Once self-sustaining communities exist away from Earth – on the Moon, on Mars, or freely floating in space – our species will be invulnerable to even the worst global disaster’ (p170).

I’m all for space exploration, but let’s not give up on creating a world fit for people here on Earth. That means confronting today’s risk-aversion: what Mick Hume describes as ‘humanity’s most powerful self-imposed constraint on its own potential liberation’ (see Who wants to live under a system of Organised Paranoia?).


