The year Big Tech became the Ministry of Truth
In 2020, social-media firms suppressed protest, scientific debate and corruption allegations against the powerful.
In this year of lockdown, it is not just our movements, social lives and work that have been restricted. Big Tech has also stepped up its efforts at closing down what we can say and read. Two issues in particular have been central to Silicon Valley’s escalating war on wrongthink: the Covid-19 pandemic and the US presidential elections.
It’s hard now to imagine the time when Twitter’s managing director proudly declared his social-media platform to be ‘the free-speech wing of the free-speech party’. Since then, free speech has suffered death by a thousand cuts. Initially, the screws on free expression were tightened under the guise of eliminating ‘hate speech’, but this year, ‘misinformation’ has become the main pretext for censorship.
Since the arrival of the pandemic, the World Health Organisation has warned breathlessly of a parallel threat to health – the ‘infodemic’. It was feared that fake news about the virus could undermine public-health messaging and containment efforts and encourage reckless behaviour.
But who defines what counts as Covid misinformation? Certainly, the first things to be taken offline were obviously nonsense, such as David Icke’s conspiracy theories linking the virus to 5G. But it quickly became clear that misinformation would essentially be defined as anything that contradicts the authorities.
In April, Facebook began removing pages and posts which were used to organise protests against the lockdown. It had actually reached out to various US state governments to ascertain which posts to take down. Any advertised gathering which did not ‘follow the health parameters established by the government’ and was ‘therefore unlawful’ was liable for removal. In essence, this was state censorship outsourced to the private sector.
When asked what constituted misinformation, YouTube CEO Susan Wojcicki said her platform will remove ‘anything that is medically unsubstantiated’, as well as ‘anything that goes against WHO [World Health Organisation] recommendations’.
But these are two different things entirely. Over the course of the pandemic, the WHO has changed its mind on the utility of lockdown and face masks (the latter due to political lobbying rather than medical evidence). Back in January, it relayed the view of the Chinese authorities that there was ‘no clear evidence of human-to-human transmission of the novel coronavirus’, a statement which, if it came out of a YouTuber’s mouth, would surely be banned as misinformation today.
When something was medically correct but went against government or WHO guidelines, the social-media platforms sided with the authorities. For instance, an article reporting on a major randomised controlled trial on the efficacy of masks against Covid, written by Carl Heneghan and Tom Jefferson of Oxford University’s Centre for Evidence-Based Medicine, was declared to be ‘misinformation’ by Facebook. Randomised controlled trials are generally seen as the gold standard of medical evidence, but apparently not by social-media ‘fact-checkers’.
Social media’s war on misinformation then went into overdrive in the run-up to the presidential election. In May, Twitter started fact-checking the words of President Trump, warning users about his claims that the upcoming election would be rigged against him. This was nonsense, of course, but Twitter’s move represented a worrying and unprecedented intervention into democracy. And things got worse. At the end of July, Twitter started removing some of Trump’s posts entirely for spreading misinformation about the coronavirus.
As the election drew closer, the social-media firms made the terrifying decision to remove and ban stories which were true, but were politically inconvenient. Many had blamed Facebook and Twitter for Trump’s shock win in 2016 – accusing the firms of hosting fake news and failing to spot foreign bots. Neither of these things could actually explain the ballot-box revolt, of course, but the platforms were determined never to be blamed again.
In mid-October, the New York Post published its exposé of Hunter Biden. Emails found on the ‘laptop from hell’, according to the Post, suggested that Biden Jr was able to grant access to his father, Joe Biden, for cash. These were allegations of corruption against one of the presidential candidates. But Silicon Valley quickly mobilised to crush the story.
Facebook’s communications director, Andy Stone, announced that ‘we are reducing its distribution on our platform’ as part of ‘our standard process to reduce the spread of misinformation’. The story was ‘eligible to be fact-checked’, but there has been no update from Facebook on it since, despite Hunter now facing a federal criminal investigation.
Twitter went even further to stop the spread. It blocked users from linking to the story and from posting photos from the reports. Users who clicked links that had already been posted were told that, ‘This link has been identified by Twitter or our partners as being potentially harmful’. This message was later updated to say, ‘The link you are trying to access has been identified by Twitter or our partners as being potentially spammy or unsafe’.
The New York Post was even locked out of its Twitter account for a number of weeks, stopping one of America’s oldest newspapers (founded by Alexander Hamilton, no less) from sharing any of its stories on Twitter. When Trump’s press secretary, Kayleigh McEnany, tweeted the story from her personal account, she was locked out, too. All this for sharing a story that ‘fact-checkers’, try as they might, could not show to be false. Nor were many of the allegations ever denied by the Biden team.
2020 was the year that the social-media giants truly bared their teeth. Facebook and Twitter stopped being facilitators of debate and discussion, and instead decided what was true or false, supposedly on our behalf. A dangerous development.