Twitter Has Updated its Warning Prompts on Potentially Harmful Tweet Replies


After testing them out over the last few months, Twitter has today announced an update to its warning prompts for tweet replies that it detects may contain offensive remarks.

As explained by Twitter:

“If someone in the experiment Tweets a reply, our technology scans the text for language we’ve determined may be harmful and may consider how the accounts have interacted previously.”

After its initial trials, Twitter has now improved its detection methods for potentially problematic replies, and added more detail to its explanations, which could help users better understand the language they’re using, and perhaps reduce instances of unintended offense.


Of course, some people see this as overstepping the mark – that Twitter is trying to control what you say and how you say it, infringing on free speech. But it’s really not – the prompts, triggered by replies that have previously been reported, simply aim to reduce misinterpretation and offense by asking users to reassess before posting.

If you’re happy with your tweet, you can still reply as normal. Instagram uses a similar system for its comments. 

As Twitter notes, the new process is being tested with selected users on Android, iOS, and on the Twitter website.

Socialmediatoday.com
