The public has been shut out of the AI debate
The mad dash to limit artificial intelligence has proved a major boon for the technocratic elites.
The first ever Artificial Intelligence Safety Summit was not exactly a roaring success.
Earlier this month, UK prime minister Rishi Sunak brought together 28 nations and the EU to discuss the risks of AI at Bletchley Park in Buckinghamshire. The government had clearly hoped to use the summit to position itself as some sort of world leader in AI regulation. Sunak used the occasion to launch something called the AI Safety Institute, and to publish something called the Bletchley Declaration, which declared that ‘AI should be designed, developed, deployed and used in a manner that is safe… human-centric, trustworthy and responsible’.
Unfortunately for Sunak, his moment in the sun was overshadowed by the big AI news from the US. Just days before the summit kicked off, President Biden had signed a rushed executive order on ‘safe, secure and trustworthy AI’. This represents the most comprehensive attempt to date to regulate the world’s biggest AI companies. And it will undoubtedly prove far more consequential than the toothless Bletchley Declaration.
The executive order means that any US business developing AI models that could pose a serious risk to national security, economic security or public health must notify the government when training these AI systems and share its safety-test results.
While the Bletchley summit was underway, US vice-president Kamala Harris held a press conference to spell out the purpose of the executive order. ‘Let us be clear, when it comes to AI, America is a global leader’, she stated. ‘It is American companies that lead the world in AI innovation. It is America that can catalyse global action and build global consensus in a way that no other country can.’ The message was clear: the US will write the rules of the AI game, whether the rest of us like it or not.
In this rush to regulate AI, democracy is being sidelined. Biden’s executive order is equivalent to a monarchical decree. He even managed to bypass Congress by invoking the Defense Production Act, a Cold War-era law that gives presidents emergency authority to control domestic industries.
We see precisely the same undemocratic tendencies in the EU when it comes to AI. Lacking its own significant AI industry, the EU is attempting to become the global regulator of everyone else’s. The EU’s Artificial Intelligence Act – which is set to become law next year – will be the world’s first attempt to impose safety guardrails on generative AI, like ChatGPT. In this case, the European Commission is attempting to regulate AI without recourse to democratic controls or accountability.
This regulation would not stop at the EU’s own borders. Just look at past EU laws governing the internet. Its General Data Protection Regulation (GDPR), the world’s toughest privacy and security legislation, has become a de facto global standard for online businesses. And the 2022 Digital Services Act (DSA), a law regulating online content, has had far-reaching consequences for social-media companies across the world.
Indeed, Thierry Breton, commissioner for the EU’s internal market, made his intentions abundantly clear earlier this year. On a visit to Silicon Valley to oversee tech giants’ compliance with EU content rules, Breton declared: ‘I am the enforcer. I represent the law, which is the will of the state and the people.’
At least Biden was elected by US citizens. Breton can’t say the same. Yet there he was, attempting to decide the future of AI on the world’s behalf.
No doubt some would argue that the complexity of AI is beyond the grasp of ordinary people. That the great and the good in government, academia and Silicon Valley should be left to get on with regulating AI on our behalf. Yet it seems that Biden’s sudden zeal for AI regulation was actually sparked during a weekend in which he watched the latest Mission Impossible film. Clearly, we are not in the safest of hands.
Interestingly, during the UK’s AI Safety Summit, numerous roundtable discussions paid lip service to the need for public involvement in future AI regulation. The problem, however, is that the current agenda and its frame of reference have already been set without any public consultation. If the public had been consulted, we might see more attention being paid to solving the immediate and practical problems of AI, such as its potential to cause job losses in certain sectors, to generate false information, to misidentify people through facial-recognition software, or to flag tumours that aren’t there in cancer screenings. Instead, we see the elites indulging in an implausible fantasy about AI developing superintelligence and taking over the world, à la Mission Impossible.
For all the fearmongering of the supposed experts, AI is not actually threatening the future of mankind. The real danger comes from our safety-obsessed, technocratic elite, who are increasingly removed from any democratic accountability or oversight. Now more than ever, we should be asking: who will regulate the regulators?
Dr Norman Lewis is managing director of Futures Diagnosis and a visiting research fellow of MCC Brussels.