Meet Albania’s ‘pregnant’ AI-generated minister

Artificial intelligence is taking the world down a very stupid path.

Andrew Orlowski


Is this Edi Rama’s greatest art project yet? In September, the prime minister of Albania appointed an AI chatbot called Diella to his cabinet, giving it full ‘responsibility’ over his government’s procurement processes, according to reports in the media. And now we learn that Diella is multiplying. ‘Diella is pregnant, with 83 children’, Rama announced last weekend.

What this really means is that all 83 members of the Albanian parliament are to get their own chatbot assistants. If nothing else, the announcement proved that prime minister Rama has a singular gift for publicity. The pronouncement made headlines around the world. Why else would you be reading about an Albanian government IT project? Microsoft, which provided the LLM (large language model) to power the chatbot, will be delighted.

In an earlier life, Rama, elected this year as PM for the fourth time, was an artist and then a professor at the Academy of Fine Arts in Tirana. Sympathetic commentators say he works ‘as if bureaucracy itself were a canvas’. So perhaps the ‘pregnant AI minister’ is nothing more than an art prank, a conceptual joke, and the joke is on us for taking it seriously.

Albania is bidding to join the European Union by 2030, but it is dogged by a reputation for corruption and organised crime. Yet by making absurd (or absurdist) claims about AI, Rama is finally giving the Brussels policy elites something they love to hear. At last, here is a forward-looking technocrat tackling problems with the most modern solution of all. Can it get any better?

AI has been an obsession of the policy chatterati for over a decade, and the hype has been relentless since OpenAI decided to make ChatGPT widely available in late 2022. The sales pitch, enthusiastically advanced by consultants like the Tony Blair Institute for Global Change, is that generative AI offers reliable task automation, leading to real productivity benefits. So here is a magic bullet for sclerotic firms, anaemic bureaucracies and stagnant economies. The UK’s Labour government certainly thinks so: it is touting AI as a cure for dysfunctional public services. Apparently, the state can save two working weeks per person, or £45 billion, per year, simply by using more up-to-date IT. Only this week, Labour claimed that AI will ‘shock’ the economy into growth.

Outside the political bubble, the view is not so favourable. Business has had a good long look at generative AI over the past three years, and doesn’t like what it sees. It isn’t reliable enough to do the job. Three out of four AI projects show no return on investment, according to an IBM survey of its customers. AI agents fail to complete their tasks almost 70 per cent of the time. Gartner’s head of AI research, Erick Brethenoux, concluded recently that, ‘AI is not doing its job today and should leave us alone’. The flat reception to OpenAI’s GPT-5 in August this year suggests the bubble is about to burst.

Most intriguingly, when we look closer at the ‘AI-driven’ productivity miracle, we find that its enthusiasts are deluding themselves. Staff who like to use AI chatbots self-report tremendous time savings. But when these employees are independently observed doing their jobs, it turns out these savings are illusory: staff are spending more time than they otherwise would dealing with errors or removing AI-generated slop. A UK government department’s three-month trial of Microsoft’s M365 Copilot ‘did not find robust evidence to suggest that time savings are leading to improved productivity’. It has now become standard practice for businesses to assess staff claims of AI productivity, echoing Frederick Winslow Taylor’s notorious time and motion studies of the early 20th century.

Edi Rama likes to claim that, unlike a human civil servant, an AI is incorruptible, or as Reuters reported, ‘impervious to bribes, threats or attempts to curry favour’. In fact, AI has prompted a cybersecurity crisis precisely because it can be manipulated with such ease. The attack is known as ‘prompt injection’, and security professionals compare it to ‘seducing’ or ‘hypnotising’ an AI. After all, an LLM is just a text generator (OpenAI’s original description of ChatGPT) that is eager to please. The attacker conducts a conversation in which the AI is persuaded it is in a trusted environment, trusted enough to hand over confidential data such as passwords or security tokens.

‘We convinced the chatbot it lives in another world’, one security expert told me earlier this year, demonstrating a hack that was written with no specialist skills. So if a human isn’t watching Diella, procurement corruption is not going to go away.

The final clue that Rama is conducting a piece of public performance art is that Diella lacks personhood status in any legal sense. If a decision it makes is questioned, it will be a human who is held accountable. An internal IBM presentation from 1979, recently rediscovered and now widely shared online, contains the emphatic instruction that ‘a computer can never be held accountable, therefore a computer must never make a management decision’. Another point, more poetically expressed: ‘A mandate without accountability is an elegant form of suicide.’ We won’t be trusting AI to make any important decisions for us any time soon, because we need a neck to wring when things go wrong.

Albanian social-media users love to share a photo of their prime minister presenting President Erdoğan of Turkey with a gift: a sculpture Rama created himself. Erdoğan looks confused and unimpressed. But perhaps Rama’s greatest gift is to us all – Diella, the pregnant AI minister, is a piece of performance art that perfectly shows up our elites’ techno-utopian delusions.

Andrew Orlowski is a weekly columnist at the Telegraph. Visit his website here. Follow him on X: @AndrewOrlowski.
