Why we need free speech online

In their crusade against 'hate speech', regulators want to subject all internet users to a system of parental controls.

Sandy Starr

The rush to introduce new legislation outlawing ‘hate speech’ on the internet has become a Europe-wide project. The ‘Brussels Declaration’ issued by the Organisation for Security and Cooperation in Europe (OSCE) – which came out of the proceedings of its Conference on Tolerance and the Fight against Racism, Xenophobia and Discrimination, in which I participated in Brussels in September 2004 – commits OSCE member states to ‘combat hate crimes, which can be fuelled by racist, xenophobic and anti-Semitic propaganda in the media and on the internet’ (1).

The chair of the European Network Against Racism, a prominent network of non-governmental organisations, argued at the same Brussels conference that ‘any effective instrument to fight racism’ in law should criminalise ‘incitement to racial violence and hatred’, ‘public insults on the ground of race’, ‘the condoning of crimes of genocide, crimes against humanity and war crimes’, ‘the denial or trivialisation of the Holocaust’, ‘public dissemination of racist or xenophobic material’, and ‘directing, supporting or participating in the activities of a racist or xenophobic group’. Additionally, ‘racist motivation in common crimes should be considered an aggravating circumstance’ – as it already is in UK law (2).

As the idea that ‘hate speech’ is a growing problem in need of official regulation and censorship has reached prominence across Europe, it is not surprising that the internet has emerged as a particular focus for concern. The internet poses a challenge to older forms of regulation and makes a nonsense of boundaries between jurisdictions. There have been calls for the authorities to close down websites such as Redwatch and Noncewatch – both of which are linked to the fascist organisation Combat 18, and which contain hitlists of supposed Marxists and paedophiles respectively. More humorous websites, such as I Hate Hawick (now defunct) – which consisted largely of strongly-worded invective against the Scottish town of Hawick and its rugby fans – have also come under fire for preaching hate (which is ironic, given that one of the things the website took Hawick’s residents to task for was their alleged racism) (3).

But what does it mean, to attempt to outlaw ‘hate speech’ from the internet? This discussion has disturbing implications, both for the future of the internet and society’s approach to free speech more broadly.

  • Regulating hate speech on the internet

The internet continues to be perceived as a place of unregulated and unregulable anarchy. But this impression is becoming less and less accurate, as governments seek to monitor and rein in our online activities.

Initiatives to combat online hate speech threaten to neuter the internet’s most progressive attribute – the fact that anyone, anywhere, who has a computer and a connection, can express themselves freely on it. In the UK, the regulator the Internet Watch Foundation (IWF) advises that if you ‘see racist content on the internet’, then ‘the IWF and police will work in partnership with the hosting service provider to remove the content as soon as possible’ (4).

The presumption here is clearly in favour of censorship – the IWF adds that ‘if you are unsure as to whether the content is legal or not, be on the safe side and report it’ (5). Not only are the authorities increasingly seeking out and censoring internet content that they disapprove of, but those sensitive souls who are most easily offended are being enlisted in this process, and given a veto over what the rest of us can peruse online.

The Council of Europe’s Additional Protocol to the Convention On Cybercrime, which seeks to prohibit ‘racist and xenophobic material’ on the internet, defines such material as ‘any written material, any image or any other representation of ideas or theories, which advocates, promotes or incites hatred, discrimination or violence, against any individual or group of individuals, based on race, colour, descent or national or ethnic origin, as well as religion if used as a pretext for any of these factors’. Can we presume that online versions of the Bible and the Koran will be the first things to go under this regime? Certainly, there are countless artistic and documentary works that could fall foul of such all-encompassing regulation.

In accordance with the commonly stated aim of hate speech regulation – to avert the threat of fascism – the Additional Protocol also seeks to outlaw the ‘denial, gross minimisation, approval or justification of genocide or crimes against humanity’. According to the Council of Europe, ‘the drafters considered it necessary not to limit the scope of this provision only to the crimes committed by the Nazi regime during the Second World War and established as such by the Nuremberg Tribunal, but also to genocides and crimes against humanity established by other international courts set up since 1945 by relevant international legal instruments’.

This is an instance in which the proponents of hate speech regulation, while ostensibly guarding against the spectre of totalitarianism, are acting in a disconcertingly authoritarian manner themselves. Holocaust denial is one thing; debate over the scale and causes of later atrocities, such as those in Sudan or the former Yugoslavia, and over whether it is right to describe such conflicts as genocide, is another – and that debate is ongoing and legitimate. Yet the European authorities stand to gain new powers that will entitle them to impose upon us their definitive account of recent history, which we must accept as true on pain of prosecution.

The restrictions on free speech contained in the Additional Protocol could have been even more severe than they currently are. Apparently, ‘the committee drafting the Convention discussed the possibility of including other content-related offences’, but ‘was not in a position to reach consensus on the criminalisation of such conduct’ (6). Still, the Additional Protocol as it stands is a significant impediment to free speech, and an impediment to the process of contesting bigoted opinions in open debate.

As one of the Additional Protocol’s more acerbic critics remarks: ‘Criminalising certain forms of speech is scientifically proven to eliminate the underlying sentiment. Really, I read that on a match cover.’ (7) Proof, perhaps, that you cannot believe everything that you read in the bar. The idea that censorship leads people to speak and act in the correct way is a highly dubious and contested concept. What is certainly true, though, is that once free speech is limited it ceases to be free.

  • Once free speech is limited, it ceases to be free

Those who argue for the regulation of hate speech often claim that they support the principle of free speech, but that there is some kind of distinction between standing up for free speech as it has traditionally been understood, and allowing people to express hateful ideas. So when he proposed to introduce an offence of incitement to religious hatred into British law, former UK home secretary David Blunkett insisted that ‘people’s rights to debate matters of religion and proselytise would be protected, but we cannot allow people to use religious differences to create hate’ (8).

Divvying up the principle of free speech in this way, so that especially abhorrent ideas are somehow disqualified from its protection, is a dubious exercise. After all, it’s not as though free speech contains within it some sort of prescription as to what the content of that speech will consist of. Any such prescription would be contrary to the essential meaning of the word ‘free’.

The Additional Protocol to the Convention On Cybercrime invokes ‘the need to ensure a proper balance between freedom of expression and an effective fight against acts of a racist and xenophobic nature’. But this notion of ‘balance’ is questionable. Unless we’re free to say what we believe, to experience and express whatever emotion we like (including hate), and to hate whomever we choose, then how can we be said to be free at all? (9)

According to the European human rights tradition, rights often have to be balanced with one another and with corresponding responsibilities. Even the most prominent advocates of human rights agree that this can be a tricky exercise. At an event on freedom of expression and the internet, organised by the United Nations Educational, Scientific and Cultural Organisation (UNESCO) at its Paris headquarters in February 2005, I found myself speaking alongside the barrister, sometime judge and formidable human rights theorist Geoffrey Robertson (10). I put it to him that the exceptions to freedom of expression that he was endorsing, in instances of incitement to racial hatred or genocide, amounted to an indefensible restriction on free speech. ‘Human rights law is a bugger’, he replied ruefully.

The American constitutional model, however, is far less ambiguous about the need to uphold certain freedoms, freedom of speech among them, without compromise. The fact that the degree of free speech enjoyed on the internet over the past decade has, at least initially, conformed more to American standards than to European standards, has been a cause of exasperation for some. Technology commentator Bill Thompson, for instance, disparages ‘the USA, where any sensible discussion is crippled by the Constitution and the continued attempts to decide how many Founding Fathers can stand on the head of a pin’, and where ‘they decide to run their part of the net according to the principles laid down 250 years ago by a bunch of renegade merchants and rebellious slave owners’ (11).

Free speech has even been subject to certain exceptions in the USA, most notably under the principle of ‘clear and present danger’. This exception has been used as a justification for regulating hate speech, but it is in fact a very specific and narrow exception, and as originally conceived it does not support the regulation of hate speech at all. ‘Clear and present danger’ was formulated by the Supreme Court Justice Oliver Wendell Holmes Jr, with reference to those exceptional circumstances in which rational individuals can be said to be compelled to act in a certain way. In Holmes Jr’s classic example – ‘a man falsely shouting fire in a theatre and causing a panic’ – rational individuals are compelled to act by immediate fear for their safety (12).

In the vast majority of instances, however – including incitement to commit a hateful act – no such immediate fear exists. Rather, there is an opportunity for the individual to assess the words that they hear, and to decide whether or not to act upon them. It is therefore the individual who bears responsibility for his actions, and not some third party who incited that individual to behave in a particular way. While it’s understandably disconcerting, to take one example, for Nick Ryan – who writes books and makes programmes exposing the far right – to encounter a message board posting about him saying ‘someone should knife this cunt’, such words are not in themselves a legitimate pretext for censoring internet content (13).

The issue is not about the right of a handful of individuals to peddle hateful content. Who really cares if they have a voice or not? But what the concern about online hate speech reveals is the level of official contempt for users of the internet. There is a fear that people reading hateful content on their computer will unwittingly take those ideas on board, and be incited to commit violent acts as a result. Therefore, it is assumed that the public needs protection from hateful ideas online in much the same way that children are protected from sites containing pornography and violence. But adult internet users are not children, and nor are they stupid or so easily influenced.

We know that the internet is host to a multitude of ideas: some good, some bad, and some that are simply unworthy of our attention. To assume that internet users are incapable of filtering these ideas for themselves shows a high level of disdain for all of us, as though we are all potentially violent criminals, who only need to view a website to make us act on our base instincts. To counter this view, we need to take a step back from the easy assumptions that the authorities make about censoring hate speech, and understand why these assumptions are wrong.

  • Distinguishing speech from action, and prejudice from emotion

The British academic David Miller, an advocate of hate crime legislation, complains that ‘advocates of free speech tend to assume that speech can be clearly separated from action’ (14). But outside the more obscure reaches of academic postmodernism, one would be hard-pressed to dispute that there is a distinction between what people say and think on the one hand, and what they do on the other.

Certainly, it becomes difficult, in the absence of this basic distinction, to sustain an equitable system of law. If our actions are not distinct from our words and our thoughts, then there ceases to be a basis upon which we can be held responsible for those actions. Once speech and action are confused, then we can always pass the buck for our actions, no matter how grievous they are – an excuse commonly known as ‘the Devil made me do it’.

It is not words in themselves that make things happen, but the estimation in which we hold those words. And if ideas that we disagree with are held in high estimation by others, then we are not going to remedy the situation by trying to prevent those ideas from being expressed. Rather, the only legitimate way to tackle support for abhorrent ideas is to seek to persuade the public of our own point of view, through political debate. When the authorities resort to hate speech regulation in order to suppress ideas they object to, it is an indication that the state of political debate is far from healthy.

As well as distinguishing between speech and action, when assessing the validity of hate speech as a regulatory category, it is also useful to make a distinction between forms of prejudice such as racism, and generic emotions. Whereas racism is a prejudice that deserves to be contested, hatred is not objectionable in itself. Hatred is merely an emotion, and it can be an entirely legitimate and appropriate emotion at that.

When the Council of Europe sets out to counter ‘hatred’, with its Additional Protocol to the Convention On Cybercrime, it uses the word to mean ‘intense dislike or enmity’. But are right-thinking people not entitled to feel ‘intense dislike or enmity’? Hate is something that most of us experience at one time or another, and is as necessary and valid an emotion as love. Even David Blunkett, the principal architect of initiatives against hate speech and hate crimes in the UK, has admitted that when he heard that the notorious serial killer Harold Shipman had committed suicide in prison, his first reaction was: ‘Is it too early to open a bottle?’ (15) Could he even say that, under a regime where hate speech was outlawed?

Hate speech regulation is often posited as a measure that will prevent society from succumbing to totalitarian ideologies, such as fascism. Ironically, however, the idea that we might regulate speech and prosecute crimes according to the emotions we ascribe to them, is one of the most totalitarian ideas imaginable.

Most countries already have laws that prohibit intimidation, assault, and damage to property. By creating the special categories of ‘hate speech’ and ‘hate crime’ to supplement these offences, and presuming to judge people’s motivations for action rather than their actions alone, we come worryingly close to establishing in law what the author George Orwell called ‘thoughtcrime’.

In Orwell’s classic novel Nineteen Eighty-Four, thoughtcrime is the crime of thinking criminal thoughts, ‘the essential crime that contained all others in itself’. Hatred is permitted, indeed is mandatory, in Orwell’s dystopia, so long as it is directed against enemies of the state. But any heretical thought brings with it the prospect of grave punishment. Orwell demonstrates how, by policing language and by forcing people to carefully consider every aspect of their behaviour, orthodoxy can be sustained and heresy ruthlessly suppressed.

The human instinct to question received wisdom and resist restrictions upon thought is, ultimately and thankfully, irrepressible. But inasmuch as this instinct can be repressed, the authorities must first encourage in the populace a form of wilful ignorance that Orwell calls ‘crimestop’ – in Nineteen Eighty-Four, the principal means of preventing oneself from committing thoughtcrime. In Orwell’s words: ‘Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments…and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’ (16)

Labelling speech that we disagree with ‘hate speech’, and seeking to prohibit it instead of taking up the challenge of disputing it, points to a world in which we resort to ‘protective stupidity’ to prevent the spread of objectionable ideas. Not only is this inimical to freedom, but it gives objectionable ideas a credibility that they often don’t deserve, by entitling them to assume the righteous attitude of challenging an authoritarian status quo. This is particularly stark when applied to the internet – where so many ideas float around, and many of these deserve no credibility at all.

  • Putting the internet into perspective

The internet lends itself to lazy and hysterical thinking about social problems. Because of the enormous diversity of material available on it, people with a particular axe to grind can simply log on and discover whatever truths about society they wish to. Online, one’s perspective on society is distorted. When there are so few obstacles to setting up a website, or posting on a message board, all voices appear equal.

The internet is a distorted reflection of society, in which minority and extreme opinions are indistinguishable from the mainstream. Methodological rigour is needed if any useful insights into society are to be drawn from what one finds online. Such rigour is often lacking in discussions of online hate speech.

For example, the academic Tara McPherson has written about the problem of deep-South redneck websites – what she calls ‘the many outposts of Dixie in cyberspace’. As one reads through the examples she provides of neo-Confederate eccentrics, one could be forgiven for believing that ‘The South Will Rise Again’, as the flags and bumper stickers put it. But by that token, the world must also be under dire threat from paedophiles, Satanists, and every other crackpot to whom the internet provides a free platform.

‘How could we narrate other versions of Southern history and place that are not bleached to a blinding whiteness?’, asks McPherson, as though digital Dixie were a major social problem (17). In its present form, the internet inevitably appears to privilege the expression of marginal views, by making it so easy to express them. But the mere fact of an idea being represented online does not grant that idea any great social consequence.

Of course, the internet has made it easier for like-minded individuals on the margins to communicate and collaborate. Mark Potok, editor of the Southern Poverty Law Centre’s Intelligence Report – which ‘monitors hate groups and extremist activities’ – has a point when he says: ‘In the 1970s and 80s the average white supremacist was isolated, shaking his fist at the sky in his front room. The net changed that.’ French minister of foreign affairs Michel Barnier makes a similar point more forcefully, when he says: ‘The internet has had a seductive influence on networks of intolerance. It has placed at their disposal its formidable power of amplification, diffusion and connection.’ (18)

But to perceive this ‘power of amplification, diffusion and connection’ as a momentous problem is to ignore its corollary – the fact that the internet also enables the rest of us to communicate and collaborate, to more positive ends. The principle of free speech benefits us all, from the mainstream to the margins, and invites us to make the case for what we see as the truth. New technologies that make it easier to communicate benefit us all in the same way, and we should concentrate on exploiting them as a platform for our beliefs, rather than trying to withdraw them as a platform for other people’s beliefs.

We should always keep our wits about us, when confronted with supposed evidence that online hate speech is a massive problem. A much-cited survey by the web and email filtering company SurfControl concludes that there was a 26 percent increase in ‘websites promoting hate against Americans, Muslims, Jews, homosexuals and African-Americans, as well as graphic violence’ between January and May 2004, ‘nearly surpassing the growth in all of 2003’. But it is far from clear how such precise quantitative statistics can be derived from subjective descriptions of the content of websites, and from a subjective emotional category like ‘hate’.

SurfControl’s survey unwittingly illustrates how any old piece of anecdotal evidence can be used to stir up a panic over internet content, claiming: ‘Existing sites that were already being monitored by SurfControl have expanded in shocking or curious ways. Some sites carry graphic photos of dead and mutilated human beings.’ (19) If SurfControl had got in touch with me a few years ago, I could easily have found a few photos of dead and mutilated human beings on the internet for them. Maybe then, they would have tried to start the same panic a few years earlier? Or do they wheel out the same alarmist claims every year?

Certainly, it’s possible to put a completely opposite spin on the amount of hate speech that exists on the internet. For example, Karin Spaink, chair of the privacy and digital rights organisation Bits of Freedom, concludes that ‘slightly over 0.015 per cent of all web pages contain hate speech or something similar’ – a far less frightening assessment (20).

It’s also inaccurate to suggest that the kind of internet content that gets labelled as hate speech goes unchallenged. When it transpired that the anti-Semitic website Jew Watch ranked highest in the search engine Google’s results for the search term ‘Jew’, a Remove Jew Watch campaign was established to demand that Google remove the offending website from its listings. Fortunately for the principle of free speech, Google did not capitulate to this particular demand – even though in other instances, the search engine has been guilty of purging its results at the behest of governments and other concerned parties (21).

Forced to act on its own initiative, Remove Jew Watch successfully used Googlebombing – creating and managing web links in order to trick Google’s search algorithms into associating particular search terms with particular results – to knock Jew Watch off the top spot. Such technical ‘no platform’ campaigns are at least preferable to Google (further) compromising its ranking criteria (22). But how much better it would have been to decide that Jew Watch was beneath contempt and should simply be ignored. Not every crank and extremist warrants serious attention, even if they do occasionally manage to spoof search engine rankings.
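To see why the trick works at all, here is a minimal sketch in Python of the anchor-text signal that Googlebombing exploits. It is a toy model only: every domain is invented, the scoring is deliberately crude, and real search engines weigh anchor text alongside many other signals – nothing here reflects Google’s actual algorithm.

```python
from collections import Counter

# Toy link graph: (source page, anchor text, target page).
# Every domain below is invented for this illustration.
LINKS = [
    ("blog-a.example", "miserable failure", "campaign-bio.example"),
    ("blog-b.example", "miserable failure", "campaign-bio.example"),
    ("blog-c.example", "miserable failure", "campaign-bio.example"),
    ("news-d.example", "miserable failure", "dictionary.example/failure"),
]

def rank_for_term(term, links):
    """Rank target pages for a query term by counting inbound links
    whose anchor text contains the term - a crude stand-in for the
    anchor-text signal, one of many signals a real engine combines."""
    scores = Counter()
    for _source, anchor, target in links:
        if term.lower() in anchor.lower():
            scores[target] += 1
    return scores.most_common()  # [(target, link_count), ...] best first

# A coordinated campaign that plants enough same-anchor links can push
# its chosen target to the top of the ranking for that term.
print(rank_for_term("miserable failure", LINKS))
# [('campaign-bio.example', 3), ('dictionary.example/failure', 1)]
```

The point of the sketch is that the score aggregates other people’s links rather than anything on the target page itself – which is why a coordinated linking campaign can move a page up or down the results without the page changing at all.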

According to the Additional Protocol to the Convention On Cybercrime, ‘national and international law need to provide adequate legal responses to propaganda of a racist and xenophobic nature committed through computer systems’ (23). But legal responses are entirely inadequate for this purpose. If anything, legal responses to hateful opinions inadvertently bolster them, by removing them from public scrutiny and debate, and giving their proponents cause to pose as the champions of free speech online.

‘Hate speech’ is not a useful way of categorising ideas that we find objectionable. Just about the only thing the category does usefully convey is the attitude of policymakers, regulators and campaigners towards people who use the internet. We are accorded the status of young children – uneducated, excitable and easily led – who need a kind of parental control system on the internet to prevent us from accessing inappropriate content. The reaction to a few cranks posting their odious thoughts online is to limit every internet user’s freedom over what they write and read. In seeking to restrict a communications medium in this way, it is the regulators who really hate speech.

(1) Brussels Declaration (.pdf 20.1 KB), Organisation for Security and Cooperation in Europe, 14 September 2004, p3

(2) European Network Against Racism oral contribution (.pdf 43.0 KB), Bashy Quraishy, European Network Against Racism, 13 September 2004, p1

(3) See the Redwatch and Noncewatch websites; ‘I Hate Hawick’ website silenced, to Teries’ relief, William Chisholm, Scotsman, 29 January 2004

(4) Racial issues, on the Internet Watch Foundation website

(5) The hotline and the law, on the Internet Watch Foundation website

(6) Additional Protocol to the Convention On Cybercrime, Concerning the Criminalisation of Acts of a Racist and Xenophobic Nature Committed Through Computer Systems (.doc 71KB), Council of Europe, 28 January 2003, p3-4; Explanatory report, Additional Protocol to the Convention On Cybercrime, Concerning the Criminalisation of Acts of a Racist and Xenophobic Nature Committed Through Computer Systems, Council of Europe, 28 January 2003

(7) Euro thought police criminalise impure speech online, Thomas C Greene, Register, 11 November 2002

(8) New challenges for race equality and community cohesion in the twenty-first century (.pdf 104 KB), David Blunkett, Home Office, 7 July 2004, p12

(9) Additional Protocol to the Convention On Cybercrime, Concerning the Criminalisation of Acts of a Racist and Xenophobic Nature Committed Through Computer Systems (.doc 71KB), Council of Europe, 28 January 2003, p2. See Don’t you just hate the Illiberati?, by Mick Hume

(10) The conference, Freedom of Expression in Cyberspace, was held on 3-4 February 2005. See the Geoffrey Robertson website

(11) Damn the Constitution: Europe must take back the web, Bill Thompson, Register, 9 August 2002

(12) Schenck v United States, Oliver Wendell Holmes Jr, 3 March 1919

(13) Cited in Fear and loathing, Nick Ryan, Guardian, 12 August 2004

(14) Not always good to talk, Ursula Owen and David Miller, Guardian, 27 March 2004

(15) Explanatory report, Additional Protocol to the Convention On Cybercrime, Concerning the Criminalisation of Acts of a Racist and Xenophobic Nature Committed Through Computer Systems, Council of Europe, 28 January 2003; Blunkett admits Shipman error, BBC News, 16 January 2004

(16) 1984, George Orwell, Harmondsworth: Penguin, 2000, p21, 220-221

(17) ‘I’ll take my stand in DixieNet’, Tara McPherson, in Race in Cyberspace, ed Beth E Kolko, Lisa Nakamura, Gilbert B Rodman, New York: Routledge, 2000, p117, 128. For a review of this book, see ‘Race in Cyberspace’, Sandy Starr, in Global Review of Ethnopolitics, vol 1, no 4 (.pdf 903 KB), p132-134

(18) Intelligence Project section of the Southern Poverty Law Centre website; quoted in Fear and loathing, Nick Ryan, Guardian, 12 August 2004; Opening of the meeting (.pdf 19.2 KB), OSCE Meeting on the Relationship Between Racist, Xenophobic and Anti-Semitic Propaganda on the Internet and Hate Crimes, Michel Barnier, French Ministry of Foreign Affairs, 16 June 2004, p2

(19) SurfControl reports unprecedented growth in hate and violence sites during first four months of 2004, SurfControl, 5 May 2004

(20) Is prohibiting hate speech feasible – or desirable?: technical and political considerations (.pdf 50.1 KB), Karin Spaink, Bits of Freedom, 30 June 2004, p14

(21) See the Google and Jew Watch websites; Replacement of Google with alternative search systems in China: documentation and screenshots, Berkman Center for Internet and Society, September 2002; Localised Google search result exclusions, Benjamin Edelman and Jonathan Zittrain, Berkman Center for Internet and Society, October 2002; Empirical Analysis of Google SafeSearch, Benjamin Edelman, Berkman Center for Internet and Society, April 2003

(22) See Dropping the bomb on Google, John Brandon, Wired News, 11 May 2004. For more on the technology and politics of Google search results, see Google hogged by blogs and Giddy over Google, by Sandy Starr

(23) Additional Protocol to the Convention On Cybercrime, Concerning the Criminalisation of Acts of a Racist and Xenophobic Nature Committed Through Computer Systems (.doc 71KB), Council of Europe, 28 January 2003, p2
