The end of the free internet is around the corner

The Online Safety Bill will subject a vast amount of previously legal content to state regulation.

Victoria Hewson

Next week’s Queen’s Speech is expected to include the long-awaited Online Safety Bill. Plans for the bill were first set out in the May government’s 2019 Online Harms White Paper. Chiefly, it proposed imposing on online platforms, including private-messaging providers, a ‘duty of care’ to keep users safe from both illegal and legal ‘harms’.

The scope of the ‘harms’ the White Paper was concerned with was staggering: disinformation, hate speech, electoral manipulation, microtargeted advertising, trolling, bullying, offensive speech, child abuse, self-harm, terrorism, cybercrime, poor sleep quality, harassment and more. Following consultation, the scope seems to have been rolled back a bit. But it has already been confirmed that Ofcom will be given the role of producing and enforcing codes of practice for online platforms, setting out how they must operate in order to prevent ‘harmful’ content from reaching users. Although we have not yet seen the legal drafting, failure to follow the codes of practice is likely to result in enormous fines and other penalties for the platforms.

In my paper More Harm than Good?, published today, I explore the background to measures like the Online Safety Bill. Not too long ago, social media and online communications were viewed positively. This new technology was lauded for its potential to make huge contributions to our economy and society. It was argued that social media would make society more democratic, liberal and tolerant. Now it is widely believed, certainly among our governing elites, that democratic processes have been subverted by online disinformation and misinformation, and that children and even adults are at risk of psychological harm and exploitation from offensive or inappropriate online material. Digital platforms are portrayed as an unregulated ‘wild west’.

But in reality, the same laws already apply online as offline. In some respects, such as privacy, online activity is even more strictly regulated than it is offline. There have been legal battles for years over the state monitoring of online communications. And in 2018, the EU agreed a ‘code of practice’ on disinformation with the biggest tech companies, which committed them to filtering and deprioritising content deemed untrustworthy (as determined by state-backed fact-checkers).

Even some on the side of free speech have argued for government regulation to counter the tech firms’ suppression of non-mainstream views. But the platforms’ existing censorship is entirely compatible with the kind of codes of practice currently on the table. Given the state’s priorities, it seems unlikely that more state intervention will lead to more liberal moderation policies. Equally, if the platforms thought that by signing up to voluntary codes they could head off stricter regulation, they will be disappointed. The UK will pursue its online-harms agenda regardless, and the EU looks set to formalise its code of practice in its new Digital Services Act.

There are also calls for platforms to be treated as publishers and held liable for the content that users post. Supposedly this would redress the balance with traditional media and curtail the dominance of the digital giants. But we should be wary of going down this path, too. The current set-up, in which platforms are shielded from liability for what their users post, is good for both users and the platforms. Without that protection, the social-media platforms would filter content even more aggressively – especially anything edgy or potentially libellous – in order to protect themselves legally.

Content-regulation laws are strongly supported by MPs of all parties. They are pushed by child-protection charities, and cheered in the traditional media. But there is very little evidence that they will work to protect children and vulnerable people from abuse, or prevent crimes like grooming and people-trafficking. My research found that rather than social media and digital communications causing an increase in abuse and crime, they have brought ‘hyper-transparency’. Thanks to social media, we are much more aware of the abuses going on in society, which in turn creates a greater desire to do something about them.

More regulation won’t help. Governments would be better off giving the police the resources they need to investigate online criminality. They could also educate and support parents in keeping track of what their children read and watch online. Unfortunately, they would prefer to appoint the tech giants as the gatekeepers of our digital interactions.

The Online Safety Bill is an astonishingly ambitious piece of legislation. The drafters will have an enormous challenge in bringing some coherence to the vast number of amorphous objectives outlined in the White Paper. The government seems to think that IT and compliance professionals can simply design away online harm – without political bias and while protecting free speech. At the same time, it will task a regulator with monitoring the compliance of potentially every digital platform in the world. This ambition is about to be put to the test.

Victoria Hewson is head of regulatory affairs at the IEA and author of the new paper, More Harm than Good?
