ChatGPT will never be ‘intelligent’

The worship of AI betrays a lack of confidence in humanity.

Norman Lewis

It seems we are witnessing the birth of a new religion – the worship of artificial intelligence (AI). A slew of breathless commentary now suggests that AI is about to become more intelligent than humans, put us out of work and perhaps even threaten our very existence.

Most of the discussion around AI refers to large language models (LLMs), like OpenAI’s ChatGPT software – the latest model of which, GPT-4, came out last month. No doubt ChatGPT is an impressive technological achievement. We should welcome its considerable potential as an adjunct to human problem-solving. However, that doesn’t mean that ChatGPT represents a new form of ‘intelligence’.

The worship of AI is a mark of the low esteem in which human consciousness and agency are held today. The increasing obsession with artificial intelligence suggests a lack of faith in real human intelligence.

This misanthropy makes little sense. If humans really are so unintelligent, then AI can hardly be a solution. After all, AI only exists in the first place thanks to human intelligence and creativity.

It is important to dispel the myth that ChatGPT actually has ‘intelligence’. It is only a computer programme. It generates responses to human prompts based on training data and parameters determined by the AI engineers who built it. It does not understand the information it presents. Over time, its training data and its parameters (175 billion in the case of GPT-3; OpenAI has not disclosed the figure for GPT-4) are updated in retraining and fine-tuning runs that draw on user interactions and other input sources, which is why its responses can improve. But these improvements are the work of human AI engineers – ChatGPT is not ‘learning’, as the media like to claim.
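What ‘updating parameters’ means in practice can be made concrete with a minimal sketch. The following Python toy is purely illustrative – the single ‘parameter’, the objective and the numbers are invented, and bear no relation to OpenAI’s actual systems – but the crucial point holds: the optimisation loop is something engineers write and run. The program does not decide to improve itself.

```python
# A purely illustrative 'training' step: one parameter, one toy objective.
# Real LLMs adjust billions of parameters, but the logic is the same:
# engineers run an optimisation procedure; nothing 'learns' of its own accord.

def gradient(w, x, target):
    # Derivative of the squared error (w * x - target) ** 2 with respect to w.
    return 2 * x * (w * x - target)

w = 0.0                                  # the model's single 'parameter'
for _ in range(100):                     # a loop the engineer writes and runs
    w -= 0.1 * gradient(w, x=1.0, target=3.0)

print(round(w, 2))                       # -> 3.0: fitted, not 'understood'
```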

Unlike humans, ChatGPT doesn’t experience emotions. Nor does it forge personal connections with the people it interacts with. It is stuck in an endless present, knowing only what is necessary in the moment of interaction to react to a given prompt. It has no memory beyond the current conversation, either.

There is no ‘thought’ taking place when ChatGPT responds to questions. Nor does it ‘read’ the text it spouts forth so impressively. This is a mechanical process that combines and orders words according to a human-made algorithm. This process of regurgitating old information means that, unlike a human being, ChatGPT cannot subvert accepted assumptions or orthodoxies. Nor does it acquire knowledge – its outputs become fine-tuned only because of the human beings who interact with it.
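To see how mechanical this is, consider a toy sketch. It is purely illustrative – real LLMs use neural networks trained on vast corpora, not word-pair counts – but the principle of generating text by sampling from the statistics of past text is analogous. A bigram model ‘trains’ by counting which word follows which in its corpus, then ‘writes’ by table lookup. It can only recombine what it was given; nothing in it understands a word it produces.

```python
# Illustrative only: a toy bigram text generator. It can only ever
# recombine words it has already seen; there is no comprehension anywhere.
import random

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# 'Training': record which words follow which in the corpus.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

# 'Generation': mechanically pick a plausible next word, over and over.
word, output = "the", ["the"]
for _ in range(8):
    options = follows.get(word)
    if not options:                  # dead end: word only seen at the end
        break
    word = random.choice(options)    # no thought, just a table lookup
    output.append(word)

print(" ".join(output))              # eg 'the dog sat on the mat and the'
```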

Indeed, this is an irony that many ChatGPT evangelists overlook – it is being refined by human intelligence, through crowdsourcing on an unprecedented scale. (ChatGPT, then running on GPT-3.5, reached an estimated 100 million monthly users in January, making it the fastest-growing consumer application to date.)

The language used to describe AI is a big part of why the myth of its ‘intelligence’ is so pervasive. ‘Information’ and ‘computation’ are used almost interchangeably with ‘knowledge’ and ‘consciousness’. Yet, as philosopher Raymond Tallis rightly argues, consciousness is not the same as computation. Brains do not simply process information like computers. The mind is not just software running on the hardware of the brain. Confusing the two leads to a crass determinism. It leads people to believe that thought has nothing to do with consciousness, and is merely the result of material processes in the brain. Believing that a computer, ‘the unconscious assistant of conscious human beings’, could one day do ‘conscious things like thinking’ is a notion Tallis dismisses as ‘daft’.

Another problem is the term ‘artificial intelligence’ itself. It implies there is some kind of intelligence inside the computer. Similarly, terms like ‘machine learning’, ‘feedback’ and ‘self-directed learning’ might give the impression that conscious self-improvement is taking place. But these are only metaphors. ChatGPT has no idea what it is ‘learning’ – indeed, it has no ideas at all.

The mistaken idea that artificial intelligence is in some way replicating human intelligence goes back to Alan Turing. The confusion stems from his seminal 1950 paper, ‘Computing Machinery and Intelligence’. There, Turing proposed replacing the question ‘can machines think?’ with a practical test: if a machine’s textual ‘answers’ to questions can persuade a human observer that it is a human being, then it can be credited with thinking. If it can successfully ape human communication and pass this ‘Turing test’, so the reasoning goes, then it must be able to think intelligently like a human.

In reality, if a chatbot passes the ‘Turing test’, this is not evidence that it is thinking. Indeed, to credit something like ChatGPT with any semblance of intelligence is to denigrate human intelligence. Doing so demonstrates both gullibility and misanthropy.

Take New York Times technology columnist Kevin Roose’s 10,000-word ‘conversation’ with Microsoft’s new Bing chatbot, Sydney. Sydney tells Roose that it loves him, that it has a desire to be destructive and that it wants to become human. As technology blogger Mike Solana points out, all Roose does is get Sydney to generate scary-sounding replies that reflect Roose’s own preoccupations. Roose, like so many other technology journalists who fret about the dangers of AI, wasn’t ‘talking’ to a chatbot at all. He was essentially just ‘making scary faces in a mirror and talking to himself’. When in ‘conversation’ with Roose, Sydney simply searched its library and the internet for examples of people in similar conversations. It then awkwardly regurgitated an approximation of these conversations to Roose, who proceeded to take umbrage, despite getting the exact outcome he should have expected.

It bears repeating: Sydney is not a person. Sydney is a search engine. Sometimes, Sydney sounds scary when it is prompted to say scary things. Above all, Sydney is a mirror. It is a mirror of its programmers’ belief system, a mirror of the person it is chatting with, and a mirror of the rest of us online – the internet’s collective thinking as reflected in articles and social-media posts, which Sydney reads, summarises and turns into responses.

Some fear that chatbots’ ability to mimic humans will mean they will be readily believed, and could thus propagate misinformation. This assumption also springs from a low view of human intelligence: that the appearance of intelligence will supposedly be so convincing that ordinary human beings will blindly and uncritically believe any old crap bots regurgitate.

This view of AI is just misanthropy in a new high-tech form. The credulous technology journalists who present AI as something to be feared clearly hold a rather dim view of human intelligence. The rest of us need not be so gloomy.

Dr Norman Lewis is managing director of Futures Diagnosis and a visiting research fellow of MCC Brussels.
