By Paul Reilly, Senior Lecturer in Communications, Media & Democracy, University of Glasgow

Recent weeks have seen a significant rise in hate speech on Twitter. The ‘free speech maximalism’ championed by Elon Musk has been a dog whistle for those seeking to spread hate and bigotry online. This has inevitably led to renewed calls for online platforms to face greater sanctions for amplifying hate speech and other online harms.

This was one of the driving forces behind the 2016 EU Code of Conduct on countering illegal hate speech online, which committed companies such as Facebook, Microsoft, Twitter and YouTube to review content flagged as illegal hate speech within 24 hours and remove access to it if appropriate.

In response, these sites amended their terms of service to provide clarity on what they defined as ‘hateful’ content. In a June 2019 update to its hate speech policy, for example, YouTube stated that it would remove content promoting violence or hatred against individuals or groups based on 13 attributes including age, disability, ethnicity, gender identity and race. Yet even before Musk’s takeover, sites like Twitter had a patchy record when it came to removing such content.

While platforms have generally complied with the requirement to review content within 24 hours, the removal of flagged hate speech remains inconsistent. The EU’s annual audit of how online platforms implemented its code of conduct suggests they removed less hate speech in 2021 than in 2020. While 81 percent of flagged cases were assessed within 24 hours, platforms were found to have deleted just 62.5 percent of flagged content between March and April 2021.

These companies have been overwhelmed by the number of posts flagged by users for review over the past few years. However, in many cases content deemed to be offensive hate speech remains online because it is judged not to have violated the platform’s policy on harmful conduct.

One study that used sock puppet accounts to examine whether Facebook removed such content found that only 48 percent of posts meeting its criteria for hate speech were deleted from the site, with the removal of misogynistic hate speech proving particularly inconsistent.

Hence it is no surprise that countries such as Germany have introduced legislation like the Network Enforcement Act (NetzDG), which requires social media companies with more than two million users to remove or block access to such content within 24 hours of notification, or face fines of up to 50 million euros for non-compliance.

A similar approach has been advocated as part of the UK’s Online Safety Bill, which promises to give Ofcom the power to fine platforms up to 10% of their annual turnover if they fail to deal with illegal content. Whether such sanctions will have the desired effect of reducing online harms remains to be seen; some commentators have expressed concerns that platforms’ tendency to rely on automated filtering systems might lead to overblocking that negatively impacts freedom of speech online.

The fundamental problem is the ‘publish then filter’ model of commercial online platforms. Regulation is retrospective because it relies on offended users flagging content, which is then reviewed by platforms and removed if deemed to violate their terms of service. These sites are not town halls or public squares in which ideas are exchanged freely and respectfully. Platforms benefit financially from every click, like, and share; perhaps inevitably, the most controversial content is the most profitable.

The threat of fines is therefore unlikely to prompt these companies to review content before making it available on their sites. This raises important questions about whether online platforms should be subject to the same regulations as other media publishers.

While these companies have strategically used the platform metaphor to portray their sites as ‘champions of free speech’, researchers such as Tarleton Gillespie argue that they are in fact ‘internet intermediaries’ that share content between producers and users, as well as between citizens and elites. Now is the time to ensure they are held accountable for content before it is amplified on their platforms.

While far from perfect, the system of media regulation in countries such as the UK and Ireland does at least ensure that broadcasters cannot transmit hate speech, disinformation and other harmful content to their audiences. If we are serious about addressing online harms, then it’s time we thought of Facebook, Twitter and YouTube along similar lines. Perhaps it’s time for platforms to abandon the ‘publish then filter’ principle in favour of ‘publish with care’.
