Twitter Launches New Bot Labels to Identify Bot Accounts In-Stream


Twitter is taking the next step in adding more transparency to the tweet process by rolling out its new bot labels, which developers will now be able to voluntarily add to automated accounts.

Bot accounts on Twitter will now be displayed with a new robot icon next to the profile name, along with a marker denoting that the account is automated.

The same will also be displayed in the tweet feed, with an ‘Automated’ tag beneath the profile name on tweets.

Twitter’s using the voluntary labels as a means to help highlight ‘Good Bots’, as opposed to bot accounts that are used for negative purposes.

As explained by Twitter:

“#GoodBots help people stay apprised of useful, entertaining, and relevant information from fun emoji mashups to breaking news. The label will give people on Twitter additional information about the bot and its purpose to help them decide which accounts to follow, engage with, and trust.”

So it’s not designed to be an all-encompassing, bot vs human-controlled identifier at this stage. But it’s a step in that direction – and with bots long being identified as a key problem on the platform, it could, eventually, be a key element in combating misuse. 


Twitter’s been developing the new bot identifier for some time, and launched a live test of the option back in September. Twitter also rolled out an update for developers back in 2020 which made the identification of bot accounts a requirement of using its platform. That, essentially, means that operating a bot account without declaring that it’s a bot is against Twitter’s rules, which gives the platform a more direct means to remove accounts that fail to disclose it.

The new bot labels are the next step. At least in theory, all legitimate bot accounts on the platform should now be labeled as such, making it easier for users to understand who and what they’re engaging with, which could have a big impact on content distribution and amplification.

Maybe.

While Twitter is implementing new rules around bot usage, that doesn’t mean that those seeking to use bots for negative purposes will adhere to them.

As noted, Twitter bots have repeatedly been identified as a key distributor of misinformation and/or divisive messaging, with people using bots to influence tweet trends and make movements appear more popular than they are.

In the wake of the 2016 US Election, for example, researchers uncovered “huge, inter-connected Twitter bot networks” which had been used to influence political discussion, the largest of which incorporated some 500,000 fake accounts. In 2018, Wired reported that bot profiles often dominated political news streams, with bots contributing up to 60% of tweet activity around some events.

Even more recently, reports have suggested that the Chinese Government has been using social media bots to ‘advance an authoritarian agenda’ via global trends.

These types of operators are not likely to voluntarily disclose themselves via these new labels, so while Twitter’s new bot tags are a good thing for general transparency, they won’t necessarily stop the misuse of bots for such actions.


But maybe, as noted, they’re another step in the right direction. In combination with more specific rules on the use of bots, and improved detection processes, Twitter’s now taking more direct steps to address its bot problem, and to help users avoid potential manipulation.

It can only help – Twitter’s new bot labels are being rolled out to all self-reported bot accounts from today.





UK teen died after ‘negative effects of online content’: coroner


Molly Russell was exposed to online material ‘that may have influenced her in a negative way’ – Copyright POOL/AFP/File Philip FONG

A 14-year-old British girl died from an act of self-harm while suffering from the “negative effects of online content”, a coroner said Friday in a case that shone a spotlight on social media companies.

Molly Russell was “exposed to material that may have influenced her in a negative way and, in addition, what had started as a depression had become a more serious depressive illness,” Andrew Walker ruled at North London Coroner’s Court.

The teenager “died from an act of self-harm while suffering depression”, he said, but added it would not be “safe” to conclude it was suicide.

Some of the content she viewed was “particularly graphic” and “normalised her condition,” said Walker.

Russell, from Harrow in northwest London, died in November 2017, leading her family to set up a campaign highlighting the dangers of social media.

“There are too many others similarly affected right now,” her father Ian Russell said after the ruling.


“At this point, I just want to say however dark it seems, there is always hope.

“I hope that this will be an important step in bringing about much needed change,” he added.

The week-long hearing became heated when the family’s lawyer, Oliver Sanders, took an Instagram executive to task.

A visibly angry Sanders asked Elizabeth Lagone, the head of health and wellbeing at Meta, Instagram’s parent company, why the platform allowed children to use it when it was “allowing people to put potentially harmful content on it”.

“You are not a parent, you are just a business in America. You have no right to do that. The children who are opening these accounts don’t have the capacity to consent to this,” he said.

Lagone apologised after being shown footage, viewed by Russell, that “violated our policies”.

Of the 16,300 posts Russell saved, shared or liked on Instagram in the six-month period before her death, 2,100 related to depression, self-harm or suicide, the inquest heard.

Children’s charity NSPCC said the ruling “must be a turning point”.


“Tech companies must be held accountable when they don’t make children’s safety a priority,” the charity tweeted.

“This must be a turning point,” it added, stressing that any delay to a government bill dealing with online safety “would be inconceivable to parents”.
