Why are the elites suddenly so terrified about AI?

The sci-fi fantasy of killer computers consumed the Westminster bubble in 2023.

Andrew Orlowski

Topics Politics Science & Tech UK

In 2023, the policy elites became immersed in a giant work of collaborative science fiction. Both the White House and Whitehall are now gripped by fear of a technology that doesn’t exist and may never exist – namely, a form of god-like artificial intelligence, or artificial general intelligence (AGI).

Deftly weaving themselves into the story, politicians acted this year as if they had swooped in early to save the day. By holding AI summits and drawing up new AI regulations, they claimed to be steering humanity away from catastrophe. Outside, in the real world, potholes proliferate across the UK, NHS waiting lists grow longer and we no longer generate enough electricity domestically to keep the lights on. But at least the future will apparently be safe from killer computers.

It is no exaggeration to say our politicians have bought into a fiction. You could see this back in March, when a member of the House of Commons Science and Technology Committee, Tory MP Katherine Fletcher, embarked on a very strange flight of fancy. She speculated about how a sentient and invincible computer may one day decide to kill every cow on the planet. Yes, you read that right.

‘We are about to use technology to create something that has an interest in surviving, growing and thriving that we cannot necessarily [stop]… short of pulling every server out of the wall. How [do we] stop a very clever, well-fuelled future-planning algorithm that has been running a long time?’, she asked three baffled-looking representatives of BT, Microsoft and Google. She was describing a machine that could apparently discover and exploit new ways of replicating itself, but could also not survive being disconnected from the mains.

There was also the very clever Matthew Clifford, hand-picked to chair the new research agency, ARIA – the flagship of the UK government’s post-2019 science and technology agenda. He is also a science adviser to UK prime minister Rishi Sunak. We have ‘two years to save the world’ from AI, Clifford warned back in June, a statement that made the front page of The Times. His concerns were echoed by Sunak, who stated in the autumn that the fictional, not-yet-invented technology of AGI could create existential threats ‘on a scale like pandemics and nuclear war’.

Speculation about killer AI is a bit like QAnon for posh people. It is a collaborative metafiction, where people compete to envision ever wilder doomsday scenarios involving AI. But unlike the popular conspiracy theory for Trumpist proles, fretting over AI is a marker of very high status. As one Whitehall insider told me, you can kiss goodbye to a career in SW1 if you point out the problems with the now dominant narrative – if you dare to question whether we really have made very much progress in artificial intelligence.

You are expected to believe that AI could soon pose an existential threat to humanity. And that large language models (LLMs), like ChatGPT, are such a technological marvel that they could soon replace white-collar jobs – including lawyers, journalists and doctors. Never mind that these LLMs regularly ‘hallucinate’ and invent fake facts. Indeed, Stanford research shows that in real-world clinical situations GPT-4 is wrong 59 per cent of the time, and gives potentially lethal advice seven per cent of the time. But such concerns are readily dismissed.

Thought experiments about a Terminator AI might be useful in undergraduate classrooms, or as company team-building exercises, but should they really be the stuff of top-level policymaking? Should the supposedly grave seriousness of it all really be placed beyond question and ridicule? Making fun of the Terminator AI hysteria now has the same consequences for your career as questioning ‘existential’ climate change.

As the year closes, many politicians remain unaware of the extent to which they’ve been recruited into a strange elite cult. Back in September 2021, the UK’s National AI Strategy dealt with the question of existential risk from AI in just three paragraphs, on the final page of a 62-page document. It just wasn’t very likely to harm us, the government sensibly concluded back then. Since then, urged on by tech-bro advisers, the government has found the apocalyptic warnings too compelling to resist. By June 2023, Terminator AI had apparently become the most pressing issue of the day. In November, the UK hosted the first global AI Safety Summit. Politicians have become actors in a drama invented by followers of cranky, esoteric fringe ideologies. The weirdos have captured Westminster.

These weirdos were given a clumsy name by philosopher Émile P Torres and researcher Timnit Gebru, who coined the acronym ‘TESCREAL’. This refers to a spectrum of techno-utopian beliefs – namely, transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and long-termism.

Each of these small subcultures would have languished in obscurity had they not enjoyed huge influxes of cash over the past decade from Silicon Valley billionaires, such as Facebook co-founder Dustin Moskovitz and convicted crypto-fraudster Sam Bankman-Fried. This money bought respectability, in the form of operations such as Oxford University’s Future of Humanity Institute, where ‘long-termist’ Nick Bostrom resides as a philosopher king.

In particular, the effective-altruism (EA) movement has fuelled the hype around AI. Wealthy EA supporters say they want to ‘find the best ways to help others and put them into practice’, and attune their philanthropic efforts accordingly. And in recent years this nerdy utilitarian movement has become obsessed with AI. As author Nirit Weiss-Blatt explains, the policy obsession with AI soon followed, lubricated by some $500 million of effective altruists’ money.

Not all EA enthusiasts are pessimistic about AI. In the policy world and in tech circles, there appear to be two competing camps. Some take an accelerationist view of AI and treat it as the potential solution to all ills. Others are far more worried about its risks. But they all share the eschatological certainty that a god-like AGI, promising a future of abundance, will be created sooner or later. They only disagree about the likelihood of it malfunctioning and destroying humanity.

The money available to these cultists has been generous and plentiful. And now they have the ear of Western political elites – filling their heads with giddy notions of AI taking over the world, for good and ill.

The most astonishing tech story of 2023 wasn’t advances in artificial intelligence, but the pervasive and totalising effect that AI mythologies have had on our media and policy elites. By allowing themselves to be taken in by the weirdos, our political class have relinquished the right to ever be taken seriously again.

Andrew Orlowski is a weekly columnist at the Telegraph. Follow him on Twitter: @AndrewOrlowski.

