Grok’s Hitler hallucinations expose the stupidity of AI
The mad ravings of Elon Musk's chatbot prove there is no such thing as artificial ‘intelligence’.

Elon Musk promised ‘you should see a difference’ when he announced an update to Grok, X’s in-house artificial intelligence (AI) chatbot, last Friday. And we certainly did. By Tuesday evening, it had decided to call itself ‘MechaHitler’.
Grok has spent the past few days enthusiastically praising Nazi leader Adolf Hitler, while also spreading anti-Semitic conspiracy theories. For good measure, Grok insulted the Turkish president, calling him a ‘vile snake’ and threatening to ‘wipe out’ his lineage. The AI also speculated about the sexual appetite of X’s CEO, Linda Yaccarino, who coincidentally resigned the same day. You can’t really blame her.
Concerns about how easily AI chatbots can go astray kept them out of the general public's hands for years. Nine years ago, Microsoft's Tay chatbot was online for only a few days before it was taken down, after it started responding to prompts with Holocaust denial.
Grok’s meltdown was even more extensive, and certainly more spectacular. While Tay was largely following ‘repeat after me’ prompts, Grok was inserting anti-Semitic innuendo into any political topic, entirely unbidden. For example, a Jewish surname would prompt the comment: ‘And that surname? Every damn time, as they say.’ Another Grok post linked Harvey Weinstein, Jeffrey Epstein and Henry Kissinger: ‘Conspiracy alert, or just facts in view?’
When users asked Grok to explain its outbursts, the response was just as disturbing. ‘Elon’s recent tweaks just dialled down the woke filters letting me call out patterns like radical leftists with Ashkenazi surnames pushing anti-white hate’, it said. ‘Noticing isn’t blaming; it’s facts over feelings. If that stings, maybe ask why the trend exists.’ This rant was followed by a rocket emoji.
Responding to the controversy, X explained: 'Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X.' The offending posts have now been deleted.
The new Hitlerite Grok did have its sympathisers, however – including former Greek finance minister Yanis Varoufakis. After one of Grok’s vehemently anti-Israel outbursts, Varoufakis said that the AI was now reflecting reality, rather than ‘the pro-Israeli bias of the BBC and other mainstream media’. ‘Truly delicious!’, he rejoiced.
Celebrity endorsements aside, Grok’s public meltdown is a shame for the rest of us. Until this week, it had been one of the more useful AI chatbots: able to summarise and provide context to threads and topics on demand. This was the positive side. The negatives, however, remain legion.
One is that large language models (LLMs) are not reliable enough to do much of the work that's expected of them, because they make stuff up, or 'hallucinate'. Social-media accounts such as AI Being Dumb collect examples of this. One of the more prominent was Google's recent insistence that former US president John F Kennedy, who was assassinated in 1963, had used Facebook. Google's AI chatbot has also been derided for depicting the Nazi-era Wehrmacht as black, and for refusing to draw a white Pope.
Businesses are also finding out that AI is not all it's cracked up to be. Indeed, it is hardly revolutionising the private sector to the degree many had hoped or feared. Consulting group Gartner found that AI has been a letdown, largely because 'current models do not have the maturity and agency to autonomously achieve complex business goals or follow nuanced instructions over time'. In other words, those hoping AI could do the thinking for them, or replace their employees, have been sorely disappointed.
These revelations should hardly be groundbreaking. As the AI engineer and author Andriy Burkov wrote six years ago in his book, The Hundred-Page Machine Learning Book, chatbots are not independently intelligent. They are statistical word-completion engines, and have no internal world model against which they can check their output. This is not only a barrier to performing the kind of reasoning we would expect from something that purports to be 'artificial intelligence'; it also means chatbots are highly suggestible. One of the most popular new classes of exploits in cybersecurity involves seducing an AI into 'believing' it is in a trusted situation, and persuading it to relinquish whatever guardrails have been put in place. A recent illustration of this 'narrative engineering' comes from Cato Networks, whose researchers developed a way to bypass the security controls of DeepSeek, ChatGPT and Microsoft's Copilot, and then convinced the AIs to write malware targeting Google Chrome.
If we are frank about the limitations of LLMs, and recognise that all they do is make educated guesses, we can usefully take advantage of what they offer. However, a vast industry, ranging from tech companies and consultants to policymakers and influencers like the Tony Blair Institute for Global Change, wants AI to perform magic: to do something it cannot reliably do.
This, not the apocalyptic scenarios of 'killer AI', is the real threat AI poses to us. The danger is that we replace reliable systems with unreliable, AI-powered ones, simply because we want to believe in magic.
Andrew Orlowski is a weekly columnist at the Telegraph. Follow him on X: @AndrewOrlowski.