
Twitter’s Working on a New ‘Safety Mode’ to Limit the Impact of On-Platform Abuse


Amongst Twitter’s various announcements in its Analyst Day presentation today, including subscription tools and on-platform communities, the company also outlined its work on a new anti-troll feature, which it’s calling ‘Safety Mode’.

[Image: Twitter Safety Mode]

As you can see here, the new process would alert users when their tweets are getting negative attention. Tap through on that notification and you’ll be taken to the ‘Safety Mode’ control panel, where you can choose to activate ‘auto-block and mute’, which, as it sounds, will automatically stop any accounts sending abusive or rude replies from engaging with you for one week.

But you won’t have to activate the auto-block function. As you can see below the auto-block toggle, users will also be able to review the accounts and replies that Twitter’s system has identified as potentially harmful, and then block them as they see fit. So if your on-platform connections have a habit of mocking your comments, and Twitter’s system incorrectly tags that banter as abuse, those accounts won’t be blocked unless you choose to keep auto-block active.
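To make that flow concrete, here’s a minimal, purely hypothetical sketch (in Python) of how an auto-block-or-review mechanism like this might work. Twitter hasn’t published any implementation details, so everything here (the SafetyMode class, the toxicity score, and the 0.8 threshold) is an assumption for illustration; only the one-week block duration comes from Twitter’s own description.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# The one-week duration comes from Twitter's description of Safety Mode;
# everything else in this sketch is hypothetical.
BLOCK_DURATION = timedelta(weeks=1)

@dataclass
class Reply:
    author: str
    text: str
    toxicity: float  # assumed 0.0-1.0 score from some language model

@dataclass
class SafetyMode:
    auto_block: bool = False   # the 'auto-block and mute' toggle
    threshold: float = 0.8     # hypothetical cutoff for "potentially harmful"
    blocked_until: dict[str, datetime] = field(default_factory=dict)
    review_queue: list[Reply] = field(default_factory=list)

    def handle_reply(self, reply: Reply, now: datetime) -> None:
        if reply.toxicity < self.threshold:
            return  # reply doesn't look harmful; let it through
        if self.auto_block:
            # auto-block is on: silence the author for one week
            self.blocked_until[reply.author] = now + BLOCK_DURATION
        else:
            # auto-block is off: surface the reply for manual review instead
            self.review_queue.append(reply)

    def is_blocked(self, author: str, now: datetime) -> bool:
        # blocks expire automatically once the week is up
        until = self.blocked_until.get(author)
        return until is not None and now < until
```

The design point mirrored from the announcement is that the same detection output feeds two paths: automatic, time-limited blocks when the toggle is on, and a reviewable queue when it’s off.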

It could be a good option, though a lot depends on how good Twitter’s automated detection process is. 

Twitter would be looking to utilize the same system it’s testing for its new prompts (on iOS) that alert users to potentially offensive language within their tweets.

[Image: Twitter offensive tweet check]

Twitter’s been testing that option for almost a year, and the language modeling it’s developed for that process would give it a solid foundation for this new Safety Mode system.

If Twitter can reliably detect abuse, and stop people from ever having to see it, that could be a good thing, and it could also disincentivize trolls who make such remarks in order to provoke a response. If the risk is that their clever replies could get automatically blocked, and, as Twitter notes, be seen by fewer people as a result, that could make people more cautious about what they say. Some will see that as an intrusion on free speech, and a violation of some amendment of some kind. But it’s really not.

If it helps people who are experiencing trolls and abuse, there’s definitely merit to the test.

Twitter hasn’t provided any specific detail, or any information on where the feature sits in the development cycle, but it looks likely to get a live test soon, and it’ll be interesting to see what sort of response it gets once the option is made available to users.

Source: SocialMediaToday.com


UK teen died after ‘negative effects of online content’: coroner


Molly Russell was exposed to online material ‘that may have influenced her in a negative way’ – Copyright POOL/AFP/File Philip FONG

A 14-year-old British girl died from an act of self-harm while suffering from the “negative effects of online content”, a coroner said Friday in a case that shone a spotlight on social media companies.

Molly Russell was “exposed to material that may have influenced her in a negative way and, in addition, what had started as a depression had become a more serious depressive illness,” Andrew Walker ruled at North London Coroner’s Court.

The teenager “died from an act of self-harm while suffering depression”, he said, but added it would not be “safe” to conclude it was suicide.

Some of the content she viewed was “particularly graphic” and “normalised her condition,” said Walker.

Russell, from Harrow in northwest London, died in November 2017, leading her family to set up a campaign highlighting the dangers of social media.

“There are too many others similarly affected right now,” her father Ian Russell said after the ruling.


“At this point, I just want to say however dark it seems, there is always hope.

“I hope that this will be an important step in bringing about much needed change,” he added.

The week-long hearing became heated when the family’s lawyer, Oliver Sanders, took an Instagram executive to task.

A visibly angry Sanders asked Elizabeth Lagone, the head of health and wellbeing at Meta, Instagram’s parent company, why the platform allowed children to use it when it was “allowing people to put potentially harmful content on it”.

“You are not a parent, you are just a business in America. You have no right to do that. The children who are opening these accounts don’t have the capacity to consent to this,” he said.

Lagone apologised after being shown footage, viewed by Russell, that “violated our policies”.

Of the 16,300 posts Russell saved, shared or liked on Instagram in the six-month period before her death, 2,100 related to depression, self-harm or suicide, the inquest heard.

Children’s charity NSPCC said the ruling “must be a turning point”.


“Tech companies must be held accountable when they don’t make children’s safety a priority,” tweeted the charity.

“This must be a turning point,” it added, stressing that any delay to a government bill dealing with online safety “would be inconceivable to parents”.

