

By Paul Reilly, Senior Lecturer in Communications, Media & Democracy, University of Glasgow
Company insiders recently claimed that Twitter is no longer capable of protecting users from trolling and online abuse. Events in Northern Ireland have illustrated just how toxic discourse on the platform can be.
In the past week alone, Alliance Party leader Naomi Long and journalist Patricia Devlin have shared details of the abusive, misogynistic messages they have received on sites like Twitter. Their experiences of being unable to get such content removed by platforms, or to have the police prosecute those responsible, are depressingly familiar.
As I have argued elsewhere, social media platforms are commercial entities that do not filter content before publication. Their ‘Too Little, Too Late’ moderation policies rely on offended users flagging content that is already live on these platforms. Even when content is reported, there is no guarantee it will be removed.
For example, an EU audit found that Twitter removed fewer than half of the tweets flagged as harmful between March and April 2021. It is also debatable whether the more rigorous content moderation mandated by legislation such as the UK Online Safety Bill (OSB) would reduce the level of cyber abuse. Indeed, researchers have found that such an approach often displaces online hate and harassment to less well-known platforms rather than addressing its causes.
Naomi Long has called on Twitter to retain the details of account holders to help identify those responsible for threatening and abusive behaviour online. This is of course not the first time that ending anonymity on social media has been proposed as a solution to online abuse. In February 2022, then Culture Secretary Nadine Dorries amended the draft OSB to require large online platforms to introduce user verification systems as a means of tackling cyber abuse. Crucially, the amendment stopped short of a complete ban on online anonymity, on the grounds that anonymity is important for safeguarding freedom of expression online: many groups benefit from it, including victims of domestic abuse and LGBTQI+ people.
While anonymity may fuel toxic forms of online disinhibition such as abuse and trolling, removing it will not cause online harassment to cease overnight. There is little evidence, for example, that the real-name policies introduced by Facebook and YouTube have reduced hate speech and harassment on those platforms. Moreover, critics argue that it is the lack of accountability, rather than anonymity itself, that encourages people to harass others online.
Holding people accountable for online abuse would certainly be a good start, but we also need to find ways to prevent these behaviours in the first place. One approach might be to send warnings to those posting threatening or abusive content. A number of Twitter experiments have shown that warning messages can reduce users' willingness to share harmful content. In May 2020, for example, Twitter tested prompts encouraging users to reconsider potentially harmful or offensive replies before posting them. The company claimed that 34 percent of those who saw these prompts revised their initial reply or chose not to send it at all, and that they went on to compose 11 percent fewer offensive replies after the intervention.
An even more effective approach involves reminding senders that people can be hurt by their online comments. One experiment used a ‘bot’ to send empathy-based messages to 1,350 users who had posted racist or xenophobic tweets; on average, recipients were 8.4 percent more likely to delete their original tweet after this intervention. Users are even more likely to refrain from such language in future tweets when the counter-speaker is a trusted confidant.
Content moderation and user verification may have a role to play, but the most effective way to address cyber abuse is to show offenders the consequences of their actions. Public figures may be considered ‘fair game’ for online criticism, but it's time to play the ball, not the person.