2026 will be the year the AI bubble bursts

Western elites have pinned their economic hopes on a fanciful silver bullet. The coming crash will not be pretty.

Andrew Orlowski

The two great faiths of our elites, artificial intelligence and apocalyptic climate change, took a beating in 2025. Both are wildly speculative, both require a great amount of faith, and both have been treated as reasons to undertake a dramatic re-ordering of society. While popular support for Net Zero has evaporated slowly, AI’s descent has been more dramatic.

At the start of last year, the UK’s then technology secretary, Peter Kyle, typified highbrow policy sentiment towards this new so-called miracle technology. ‘This is a once-in-a-generation opportunity’, he said in February. ‘If we can seize it, we will close the door on a decade of slow growth and stagnant productivity.’ A few months later, he shared the view that ‘by the end of the parliament we’re going to be knocking on artificial general intelligence’ – that is, an AI with human-level cognitive abilities. This is a government of true believers.

‘Agentic AI’ was the buzzword back then. It means allowing large language models (LLMs) to act autonomously, therefore becoming part of the machinery of business. Yet by the autumn, executives at OpenAI – arguably the world’s biggest AI company – were fishing for government bailouts. In December, Microsoft, which had stuffed its Copilot AI into every corner of every product, cut its sales targets ‘because almost nobody’ had used its new software. And the automation miracle? By the end of 2025, Wall Street Journal staff reported that a vending machine in their newsroom, which ran on generative AI, had given away free PlayStations and ordered guns and live fish. It ‘taught us lessons about the future of AI agents’, one of its reporters wrote.

The deficiencies of AI haven’t always been humorous. Here in the UK, both the West Midlands Police and the Scottish courts used it to disastrous effect. In November, police refused to allow the predominantly Jewish Israeli supporters of Maccabi Tel Aviv to attend a Europa League match against Aston Villa. The police report used to justify the ban was compiled with the help of AI – it referenced allegations of fan violence at past Maccabi games that had never actually been played. In December, a judge in a Scottish employment tribunal was suspected of using AI in his ruling on nurse Sandie Peggie’s harassment case against NHS Fife. The judgement contained multiple fictitious quotes, most likely ‘hallucinated’ by AI.

The list of failures goes on. An ethics guide to AI, published by one of the world’s largest academic publishers, was found to be full of made-up citations. Deloitte delivered a report to the Australian government riddled with AI-generated errors, forcing the company to agree to a AU$440,000 refund. It then repeated the trick in Canada – or at least tried to. Widespread doubts were confirmed with the release of GPT-5 in August – far more expensive than its predecessor, but widely considered underwhelming. It still couldn’t do basic arithmetic, solve logic puzzles or read analogue clocks.


A vibe shift took place in late spring. Prior to that, high-status opinion had determined that AI was both transformative and inevitable, and dissenting from that view in public policy or corporate strategy had negative career consequences.

But then businesses began to report the results of the pilot projects they had been running in recent years, and the bad news trickled in. Three out of four AI projects failed, an IBM survey of 2,000 businesses found. AI ‘agents’, such as Google’s Gemini, failed at 65 to 70 per cent of office tasks, according to a Carnegie Mellon University study. Erick Brethenoux, head of AI research at global advisory firm Gartner, concluded that ‘AI is not doing its job today and should leave us alone’. Even AI-generated movies, arguably the technology’s most impressive feat, are problematic, coming across as cynical, artificial and icky. In November, Coca-Cola apologised for an advertisement created using generative AI after just a few days of intense backlash.

As humans, we’re prone to anthropomorphism, and AI looked a little magical. People wanted to believe the illusion that this technology is ‘intelligent’. But this is not what we were promised – and it is certainly not a technological advance on a par with the internet, let alone fire or electricity, as some of its backers like to claim.

One of the biggest problems with LLMs is the way they have been built. US AI firms bet everything on more computing power, rather than clever engineering, to raise quality. China’s first significant large language model, DeepSeek, cost a fraction of what its American competitors did. Subsequent models have shown even more impressive optimisation gains, achieved through simple, good engineering decisions. In many ways, it is a repeat of what happened in the car industry in the 1970s and 1980s: American gas guzzlers – in this case, companies like OpenAI – are being outperformed by nimbler, more practical Asian models.

As 2025 ended, it was clear how vulnerable the global economy had become thanks to the AI delusion. Harvard economist Jason Furman pointed out that the first half of 2025’s GDP growth in the US was almost entirely attributable to the AI bubble. In order for this growth to be sustained, it would require huge leaps in business productivity, which just isn’t happening.

Now we also know that the bubble is being kept inflated by an incestuous web of circular deals. ‘Many of these deals are financial loops, not true indicators of growth’, observed IBM’s Ayal Steinberg in November. This prompted investor Michael Burry, whose research on the mortgage market was dramatised in The Big Short, to bet heavily against the market. Burry isn’t the only one: in November, billionaire venture capitalist Peter Thiel and SoftBank’s Masayoshi Son both dumped their shares in chipmaker Nvidia, whose hardware is used to run LLMs.

It’s all rotten. With 20 per cent of US wealth tied up in the bubble, the fallout promises to be spectacular. Strap yourself in for 2026 – the year the AI bubble will almost certainly burst.

Andrew Orlowski is a weekly columnist at the Telegraph. Visit his website here. Follow him on X: @AndrewOrlowski.
