No, social media are not destroying democracy

There’s simply no good evidence to show that voters are being driven to partisan extremes online.

Matthew Lesh

Topics Free Speech

Have you heard that social media are destroying democracy? What about the idea that Facebook ‘filter bubbles’ are creating echo chambers and driving up partisanship? Maybe you’ve read that YouTube’s recommendation algorithm drives people into ‘rabbit holes’ with ever-more extreme content? Or that the business model of social-media firms means people routinely come across misinformation and fake news?

These claims have become accepted truths in debates about social media. But there is one (pretty major) problem with them: there is a serious lack of evidence that any of them are true – and even where they contain a grain of truth, the issues seem to affect only a small minority of people.

This is the central conclusion of a recent New Yorker article by staff writer Gideon Lewis-Kraus. The piece chronicles the response to social psychologist Jonathan Haidt’s recent essay, published in the Atlantic in April, which sought to blame social media for political dysfunction. Haidt, Lewis-Kraus suggests, appears to have greatly exaggerated the level of certainty in the academic literature.

This is neatly demonstrated by a Google Doc collaborative review of the relevant literature, started by Haidt and fellow academic Chris Bail. The review lists dozens of academic articles about political partisanship and social media. But the evidence is extremely mixed.

For example, some research suggests that social media ‘reinforces the expressers’ partisan thought process and hardens their pre-existing political preferences’. But other studies have found that partisanship has grown more among groups who are less likely to use social media, that it has grown more in the US despite social media expanding across the planet, and that Facebook’s news feature may even reduce polarisation by exposing people to more viewpoints.

On the question of filter bubbles and echo chambers for news, the evidence is just as mixed. One study found that Facebook’s algorithm fails to supply people with news that challenges their attitudes. But other studies suggest that most users (and conservatives especially) subscribe to a variety of media outlets, that social media actually drive a more diverse array of news sources, and that social media might actually help decrease support for right-wing populist candidates and parties.

Then there’s the issue of foreign disinformation allegedly warping elections. While there is evidence that material from Russia’s Internet Research Agency has reached tens of millions of people in the West in recent years, it is less clear that this has had a strong impact. Russian trolls have largely interacted with individuals who are already highly polarised. Furthermore, studies indicate that just 0.1 per cent of people share 80 per cent of the ‘fake news’ that is in circulation.

To the extent that serious issues with social media have been identified, many studies indicate that they are not widespread. For example, one study found that just five per cent of users are in a news echo chamber. On the question of YouTube rabbit holes, another study found that extremist videos are largely watched by people who already hold extremist views and that other people are not driven to that content by recommendations.

This all points to the possibility that partisanship, extreme views and political dysfunction are driven by deeper social and cultural factors. To the extent that you come across more extreme views on social media, it is because individuals who are more partisan are more likely to use those platforms. But it is easier to blame social media for, say, the election of Donald Trump than it is to address the disenchantment that drove his victory in 2016.

Overall, the literature review is a fascinating case study in the complexities of academic debate and the often contradictory, uncertain and unreplicable nature of social-science research. This would all be of little concern outside of academic circles were it not for the fact that policymakers, particularly in the UK, are rushing ahead with new legislation that aims to clamp down on social-media companies based on shaky evidence that they have only partially understood. UK culture secretary Nadine Dorries reportedly arrived at a meeting at Microsoft’s headquarters recently and immediately asked when they were going to get rid of algorithms.

The preoccupation with social-media algorithms looms large in Dorries’ Online Safety Bill, which is currently being debated in parliament with limited opposition. One of the objectives of the bill is to reduce the alleged harm caused by algorithms and other design features. The government has announced that misinformation and disinformation will likely be among the types of ‘legal but harmful’ speech tackled by the bill.

The narrative about social media destroying democracy ignores the many positives of social media, including their ability to connect people and enable the discovery of new information. The result of all the duties in the Online Safety Bill will be to undermine freedom of expression, by encouraging platforms to show users less controversial content and empowering the easily offended to demand content be removed.

And all of this is based on an unproven premise that social media are responsible for many of society’s ills.

Matthew Lesh is the head of public policy at the Institute of Economic Affairs.


