Twitter’s Edit Button Won’t Fix The Real Problems

Apr 11, 2022 | online safety, tech ethics

This week Twitter announced it is working on an ‘edit’ button to let users correct typos, re-tag users or further hone their witticisms after they have posted a tweet, a function both Facebook and Instagram have had for some time. A Twitter spokesperson said the edit button has been “the most requested Twitter feature for many years”.

The Twitter edit button won’t fix trolling

It’s somewhat surprising Twitter report that an edit button is the feature users most want. The issue which should surely be consuming all their attention is how to remove the trolling and abuse which characterise the platform’s culture.

I’ve written about the online trolling problem in my book ‘My Brain Has Too Many Tabs Open’, and a 2018 report from Amnesty International, which looked specifically at women being trolled on Twitter, indicated the scale of the problem. The Troll Patrol report found that:

  • Approximately one abusive or problematic tweet was sent to women every thirty seconds;
  • 7.1% of all tweets sent to women were problematic or abusive, amounting to 1.1 million tweets;
  • Black women were 84% more likely than white women to be mentioned in abusive or problematic tweets;
  • Online abuse against women cut across the political spectrum. Politicians and journalists faced similar levels of online abuse with both liberals and conservatives affected.

“These results back up what women have long been saying – that Twitter is endemic with racism, misogyny and homophobia”. 

Kate Allen, Amnesty UK’s Director

But it’s not just women who are targeted. A recent Pew Internet Survey found that four out of 10 internet users have been harassed online, with far more having witnessed such behaviour happening to others.

Could behavioural ‘nudges’ work?

Here in the UK we’re still waiting for the Online Safety Bill to be passed into law. It contains some provisions which may help with the online abuse problem: the relevant measures will give social media users the power to block interactions with anonymous accounts, anonymity being viewed by many as emboldening users to troll.

But some studies suggest that simply reminding users of the rules of good behaviour, or the rules of the specific site or platform they are on, may also be effective. Even just pinning a post about a community’s rules to the top of discussion pages helps: a 2016 experiment conducted on Reddit found that doing so increased the chance a poster would follow the community’s conduct rules by 8%.

Twitter itself has been working to ‘nudge’ users into better behaviour on its platform by adding friction, in the form of prompts, to undesirable activities such as retweeting a post without reading it, or posting a tweet containing abusive content.

Will nudging users to re-think their posts stop online abuse?

Using artificial intelligence (AI) to detect tweet content that may be harmful or offensive, Twitter now asks users who are about to send such a tweet whether they “want to review this before tweeting”, with the options to edit, delete, or send anyway.

Twitter say that during tests before the new feature was launched they found that:

  • If prompted, 34% of people revised their initial reply or decided not to send it at all;
  • After being prompted once, people composed, on average, 11% fewer offensive replies in the future;
  • If prompted, people were less likely to receive offensive and harmful replies back.

The feature has been criticised, however: some users found the AI struggled to distinguish sarcasm and irony from genuinely offensive language.

Collective responsibility

While the Twitter edit button may allow users to rethink what they have posted after the event and potentially tone down any online flaming, some of the answers to the online culture problem undoubtedly lie within our own control.

The 2014 Pew Research report found that 60% of internet users said they had witnessed someone being called offensive names online, and 24% had witnessed someone being harassed for a sustained period of time. It’s up to all of us to call out and report behaviour like this when we see it. We can make a difference if we all do that – without waiting for platforms to belatedly wake up to the scale of the problem.

My Brain Has Too Many Tabs Open by Tanya Goodin

For more about trolling, online abuse, and a manifesto for good digital citizenship, pick up a copy of my new book.