AI is not nearly as sophisticated as you think it is
Our ignorant MPs seem to think ChatGPT is on the verge of enslaving the human race.
Westminster gifted us with an extraordinary moment last month that illustrated the impressive levels of tech ignorance among our MPs.
As part of the Science and Technology Committee’s ongoing inquiry into how we should regulate artificial intelligence, Conservative committee member Katherine Fletcher wanted to share her concerns. She is worried about what she called ‘long-term planning algorithms that are going to be smarter than us’. She said that having first been ‘set a mission’ these algorithms could then ‘decide the best decision is to remove all cows from the planet, or all humans from the planet’. And once these ‘long-term planning algorithms’ have started, she said, we may be unable to stop them.
She even likened AI to a biological lifeform. ‘We are about to use technology to create something that has an interest in surviving, growing and thriving that we cannot necessarily read’, she said. ‘Short of pulling every server out of the wall in the globe’, she continued, ‘how [do we] stop a very well-fuelled future-planning algorithm that has been running a long time?’.
Scary stuff. Fletcher had evidently given this much thought.
The three witnesses – policy executives from Google, Microsoft and BT – looked completely stunned. As well they might. ‘I don’t know what a long-term planning algorithm is’, replied Hugh Milward, general manager for Microsoft’s UK legal team. ‘It is certainly not something we are developing at Microsoft.’
Their confusion is understandable. First, no artificial-intelligence models are deployed today for ‘long-term planning’, nor would they be even remotely capable of it. For instance, OpenAI’s ChatGPT can perform, with moderate success, simple, routine tasks like summarising, predicting or translating text – not planning the future of humanity. And, in any case, whatever AI is used for, no one is forced to care about the answers it provides. The machine follows our instructions, not the other way around.
What’s more, Fletcher’s analogy with biology is total nonsense. An AI model cannot replicate itself from server to server in the way that computer viruses and other malware can. Malware spreads because it is tiny and easy to camouflage. AI models, by contrast, are not designed to replicate themselves, so an AI programme certainly isn’t capable of ‘growing’, as Fletcher speculates. And even if one could copy itself, such programmes are behemoths that require vast computational resources to run. Any server operator would certainly notice if something so computationally intensive arrived unbidden, and could always turn it off. Fletcher was spinning a wild fiction that really should have remained a shower thought.
What Fletcher’s flight of dystopian fancy vividly illustrates is the grip that AI mythology now has on our political class, and on the policy wonks it depends on. Last year, I argued on spiked that there are two ‘modern religions that eclipse wokeness in their scope and ambition: environmentalism and “artificial intelligence”’. In the first, humankind is tainted by original sin, having offended the one true deity that matters – nature. In the second, transhumanism seeks to perfect humans by unifying our ‘algorithms’ with those of a superintelligent computer. As I wrote last year, a ‘belief in the transformative power of AI’ has ‘penetrated the policy, media and administrative classes’.
This means that, for those in parliament and Whitehall, AI is a high-status subject on which it is mandatory to ‘have a view’. Speculating about AI has become a competitive sport, and Fletcher’s ‘long-term planning algorithm’ was her bid. Even the government has joined in. According to its 2021 National AI Strategy, ‘artificial-intelligence technologies generate billions for the economy and improve our lives’.
But is AI really so transformative? Recent evidence suggests not. Last month, Google’s demonstration of its large language model, intended to augment Google search, went so badly that it wiped more than $100 billion off the market value of its parent company, Alphabet. Meanwhile, Microsoft’s Bing chatbot, based on OpenAI’s ChatGPT model, is not doing much better. In early tests last month, it hallucinated false answers, and even abused and insulted its interlocutors.
The Science and Technology Committee has already heard from Michael Osborne, professor of machine learning at Oxford, who explained that today’s best AI programmes fail at problems a small child can solve. Doctoral student Michael Cohen told the committee that he could think of no examples ‘where AI has been deployed to replace people, as opposed to augmenting what they do’.
Still, what hope does an ambitious twentysomething policy wonk or civil servant have if he or she has no opinion on some future Artificial General Intelligence? Highflyers like Google bigwig Eric Schmidt and his reliable sidekick, Henry Kissinger, regularly gaslight the policy world by talking up the possibilities of AI. Last month, they penned a joint op-ed in the Wall Street Journal, titled ‘ChatGPT heralds an intellectual revolution’. For anyone climbing the greasy pole, not having an opinion on this subject is a problem. You must play the game.
So long as politicians appear powerless to change the things that voters most care about, they will find such speculation preferable. Today, our political class is unable to control the borders, fix dysfunctional public services, catch or imprison criminals, or build anything at all. As TS Eliot wrote, ‘Human kind cannot bear very much reality’. The political class can’t seem to bear the reality of its ineffectiveness, or the fact that the so-called AI revolution before us really isn’t working very well. At all. Wild speculation is their refuge from that reality.