
Censorship by algorithm

Jeffrey Rosen on Silicon Valley’s war on free speech.

spiked

Topics Covid-19 Free Speech Politics Science & Tech USA

Tech firms and social-media platforms have more influence over our lives than ever before. And they play an increasingly active role in policing political debate. Every passing month brings more online purges and suspensions of the accounts of ‘controversial’ or ‘offensive’ people and organisations, even including the president of the US. The rise of internet censorship poses a huge threat to free speech.

Jeffrey Rosen is an American academic lawyer, commentator and author. He is the president and CEO of the National Constitution Center in Philadelphia, and host of the weekly podcast, We the People. He joined spiked editor Brendan O’Neill for the latest episode of The Brendan O’Neill Show. What follows is an edited extract from their conversation. Listen to the full episode here.

Brendan O’Neill: You have written extensively about the role of Big Tech in relation to public discussion, and what we are and are not allowed to say in the online sphere. There have been numerous examples over the past few months of companies such as YouTube removing things about Covid-19 which are clearly conspiratorial or wrong. But YouTube has also removed examples of sceptical doctors saying the virus is not as bad as was first thought, or the lockdown was not the right response – things we would consider to be within the bounds of legitimate public discussion. There seems to have been an element of overreach in how some social-media companies have approached the discussion of Covid-19. Have you noticed that? And do you think it is a continuation of the role that some of these tech overlords had started to play over the past few years?

Jeffrey Rosen: Yes – there has been a notable and worrying shift in the response of tech platforms to free-speech controversies during Covid. Five years ago, there was a broad, if embattled, effort to embrace something like American free-speech standards, and to put a thumb on the scale in favour of free speech when it came to controversies involving hate speech. The Trump presidency changed that. The platforms began to be more active in making judgements of fact, either fact-checking or removing statements by the president and others that they felt were false. Then, in the Covid crisis, that extended to removing or questioning opinions regarding Covid, on the grounds that they were factually questionable. This is a worrying trend, because once the platforms put themselves in the business of deciding what is true and what is false, they are serving as editors, and they have made clear that they are not going to be bound by traditional First Amendment standards. Instead, they will balance values of free speech against other values. That is not a good path to go down.

O’Neill: You have written before about how big companies like Google and Facebook have an extraordinary amount of control and influence over our privacy and freedom of speech. You have said that, in many ways, they have greater power in those areas of life than any king, president or Supreme Court justice. How do you conceive of that power and how do you see it being exercised at the moment?

Rosen: The power is plenary: the platforms can each set their own free-speech standards, and in that sense, they are governments that are administering internally adopted constitutions. At the dawn of the tech era in the early 21st century, the platforms had basically tried to embrace a version of traditional free-speech standards. The American First Amendment standard is extraordinarily speech-protective. It says that speech cannot be banned or restricted unless it is intended to and likely to cause imminent violence. That standard comes from a 1969 Supreme Court case, Brandenburg v Ohio, and builds on principles articulated by Justices Holmes and Brandeis in the 1920s. It is more protective than the law in almost any other country in the world, certainly including Britain. But that is no longer the standard that the companies are tending to apply. Each of the platforms has adopted its own hate-speech policies, which forbid speech that offends or degrades on the basis of protected status. And the exceptions are only growing, as consumer pressure to remove hate speech continues and legitimate concerns about the spread of false facts and fake news grow. We are living in a world where these organisations are unrestricted by legal norms, since they are private companies and thus are not bound by the Constitution. They are also extremely responsive to consumer norms, which means that all of the pressures are going to be in favour of restriction rather than freedom.

O’Neill: Absolutely. It strikes me that many people seem willing to entrust their privacy, liberty and right to say certain things to these corporations. They seem prepared to accept the submission of their private lives to these platforms, and to pressure the platforms into controlling more and more areas of speech. But people in the past would have been quite sceptical of allowing the government, for example, to have so much oversight in relation to our private lives and our right to express ourselves. What explains that shift?

Rosen: Perhaps the answer has something to do with how, online, people think of themselves more as consumers than citizens. They favour efficiency, they like the ease of use of these platforms, and as long as their needs are being met, they are less likely to be concerned about the needs and rights of minorities and about constitutional principles. Also, users feel empowered to set the standards themselves. YouTube, Google and Facebook enforce their hate-speech policies based on user complaints. Individuals write in and say they are offended by something.

It is a sign of the times that Facebook recently announced the equivalent of a supreme court for Facebook, where it has assembled a group of academics and heads of NGOs to make decisions about what speech should be allowed. It shows that Facebook itself is uncomfortable with the power it is exercising – not so much because it minds exercising power, but because it has got political heat for the decisions it has made, and wants some cover. It wants independent authorities to make the decisions for it. We will see how this new body operates. Certainly, the people who are on it are thoughtful, responsible people. But creating a supreme court for Facebook raises the same challenges as creating judicial bodies in the constitutional space. What is the degree of independence, transparency and accountability? Will the Facebook judges justify their decisions with written opinions? What, if anything, is the ground for appeal? And what pressures are these judges under to ensure independence? An unaccountable supreme court is just as troubling as an unaccountable platform.

O’Neill: One of the things that frightens me most about the creeping role of social-media censorship is that it is almost censorship by robot. The algorithmic approach is used to tackle the fact that these platforms receive millions upon millions of posts every week and millions of complaints, too. My Instagram page, for example, has been taken down twice, and I think that is largely down to an algorithm detecting a word I said, or an idea I expressed, which went through the computer and set off an alarm. It makes it even more sinister in some ways, or certainly more dystopian, because it’s not even a human being exercising moral judgement and telling you that you cannot say something, which is bad enough. Instead, it’s a machine-like approach to something as complex and important as human speech and human engagement.

Rosen: Yes, absolutely. As you say, censorship by algorithm is dystopian. You understand the impulse – the volume is so big, there are billions of pieces of content exchanged on Facebook, and not a lot of human beings able to make discretionary decisions. But hard questions involving the balance between speech and other values require discretion and cannot possibly be made algorithmically. The shift towards algorithms takes the heat off Facebook, as it can claim the machine made it do something. It is a troubling move.
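The kind of word-triggered flagging O'Neill describes can be illustrated with a deliberately crude sketch. The blocklist and matching rule below are hypothetical, not any platform's real policy; the point is that a keyword match fires regardless of what the post is actually arguing:

```python
# A hypothetical illustration of context-blind keyword flagging.
# The blocklist is invented for this sketch; real moderation systems
# are far more complex, but the discretion problem is the same.
BLOCKLIST = {"lockdown", "hoax"}

def flag_post(text: str) -> bool:
    """Flag a post if it contains any blocklisted word, ignoring context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

# The same word triggers the filter whether a claim is being spread
# or debunked -- the algorithm cannot tell the difference.
print(flag_post("The pandemic is a hoax"))           # True: flagged
print(flag_post("Explaining why it is not a hoax"))  # True: also flagged
```

A human moderator would distinguish the two sentences instantly; a keyword rule cannot, which is the discretion gap Rosen goes on to describe.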

Jeffrey Rosen was talking to Brendan O’Neill in the latest episode of The Brendan O’Neill Show. Listen to the full conversation here.


