AI hysteria is out of control

Experts’ alarmism over artificial intelligence could strangle this technology at birth.

Norman Lewis

The calls for artificial intelligence (AI) to face stringent regulation are getting louder by the day. Last week, AI researcher and tech entrepreneur Mustafa Suleyman and former Google CEO Eric Schmidt called for an international panel on AI safety to be formed to regulate the technology. They suggest that to make AI safe for the future, we should take ‘inspiration from the Intergovernmental Panel on Climate Change (IPCC)’.

What’s missing from the AI debate, Schmidt and Suleyman argue, ‘is an independent, expert-led body empowered to objectively inform governments about the current state of AI capabilities and make evidence-based predictions about what’s coming’. All this is apparently necessary because lawmakers do not have a basic understanding of what AI is, how fast it is developing and where the most significant risks lie.

Schmidt and Suleyman are correct to say that before AI can be managed appropriately, politicians need to know what exactly they are regulating and why. It is undoubtedly the case that confusion and uncertainty reign when it comes to public discussions over AI.

There are, however, two glaring problems with this proposal. First, today’s uncertainty and alarmism around AI are not coming primarily from laypeople – be they politicians or the public. Rather, it is the experts themselves who are making some of the most outlandish and misleading claims about AI. Second, an IPCC-like international panel on AI safety would not result in objective, impartial or well-informed outcomes. Instead, it would institutionalise a technocratic and fear-ridden narrative from which only terrible regulation would flow.

The apocalypticism of AI experts is both unhinged and self-serving. Take the example of Ian Hogarth, the recently appointed chair of the UK government’s Frontier AI Task Force, which will inform the forthcoming AI Safety Summit in Buckinghamshire, UK, next month.

Writing in the Financial Times, Hogarth explains the dangers of what he terms the ‘race to God-like AI’. Apparently, we should all fear the arrival of a ‘superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it’. Although he concedes that ‘we are not there yet’, he then argues that ‘the nature of the technology means it is exceptionally difficult to predict exactly when we will get there. God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race.’

Really? The ‘destruction of the human race’? Clearly, we do have a lot to worry about if this is the kind of expert advice that our policymakers are getting. The imaginative leap from the large language models we have now, like ChatGPT, to ‘superintelligent’ computers – ‘God-like’ artificial general intelligence – is breathtaking. There isn’t a computer anywhere on Earth that ‘understands its environment’ and that can, without supervision, ‘transform the world around it’.

What Hogarth imagines is a machine with agency. Today’s AI is a million miles away from such a capability, and whether it can ever get there at all is debatable. Nevertheless, Hogarth is suggesting that, just in case, we should act today as if we were on the cusp of unleashing this potential nightmare.

If we followed Hogarth’s logic, then surely governments would have to ban any further AI development and impose enormous fines and prison sentences on anyone who continued this apparently dangerous work. After all, we are talking about the end of the human race and life as we know it.

Of course, Hogarth and many other experts are not advocating the end of AI research. What they are doing instead is fuelling an apocalyptic narrative that justifies a precautionary approach to regulation – an approach that will come to rely entirely upon their expertise.

This self-serving behaviour from so-called experts is genuinely worrying. And Hogarth isn’t the only example. Earlier this month, Politico revealed that an organisation tied to leading AI firms was funding the salaries of more than a dozen AI fellows in vital congressional offices, across federal agencies and at influential think-tanks in the US. These fellows are already involved in negotiations around future AI regulation. And they have helped to put fears of an AI apocalypse at the top of the US policy agenda.

Their proposed solution to this non-existent apocalypse is to bring in licences for companies to work on advanced AI. The real purpose of this is to secure the status quo and help lock in the advantages enjoyed by the existing tech giants. Worryingly, Politico has also revealed a similar influence in the UK. The main focus of Rishi Sunak’s upcoming AI Safety Summit will be ‘existential risk’.

Whether these AI experts truly believe in this fantasy is almost irrelevant. Their self-serving fear-mongering has created an apocalyptic atmosphere in which the strict regulation of AI is now a foregone conclusion. In effect, it has shut down the AI debate before it even managed to get started.

This is why Suleyman and Schmidt’s suggestion of an IPCC equivalent to oversee AI safety should not be taken lightly. Suleyman and Schmidt are rightly concerned that effective, sensible AI regulation will only happen if ‘trust, knowledge, expertise, [and] impartiality’ exist. And they are right to highlight that we lack these at present. But the idea that something like the IPCC can do this is naïve and foolhardy.

After all, the IPCC was not created to add to the world’s scientific knowledge of climate change. On the contrary, it was created to be a debate-ender, a gatekeeper shutting out anyone who disagreed with the underlying political agenda of climate-change catastrophists. Inevitably, establishing a similar body for AI would have the same result of giving weight to the most fear-mongering voices.

An international AI panel would remove the debate from the public sphere and place it in the private corridors of Big Tech and government agencies. Just as the IPCC has shielded climate policy from democratic contestation, all the signs suggest we should soon expect the same for AI.

Insulation from public accountability is a recipe for bad policymaking. Yet this is precisely what the experts, Big Tech and governments are aiming for. The prejudice that only true experts – or those developing AI today – can know and understand what is at stake could have dire consequences. It means that today’s flawed thinking about AI will likely shape the regulatory environment for the foreseeable future.

AI has the potential to be a transformative technology. Perhaps it will one day change our lives just as the automobile or electricity did in the past. But, right now, we don’t even know what AI can and cannot do for us. If we end up regulating AI on the basis of apocalyptic scenarios, we will surely squander its potential or, at the very least, leave the possible gains in the hands of a tiny clique of tech firms. Those experts who are whipping up AI hysteria will then have a lot to answer for.

Dr Norman Lewis is a writer and visiting research fellow at MCC Brussels. His Substack is What a Piece of Work is Man! He will be speaking at the Battle of Ideas Festival in London, in the discussion ‘Terminator or Tech Hype? AI and the Apocalypse’ on Saturday, 28 October. Find out more here.
