
TikTok Moves to Further Limit Potential Exposure to Harmful Content Through Automated Removals


TikTok is expanding the role of its automated detection tools for policy violations, with a new process that will remove content it detects as violating its policies at the point of upload, ensuring that no one ever sees it.

As TikTok explains, currently, as part of the upload process, every TikTok video passes through its automated scanning system, which flags potential policy violations for review by a safety team member, who then lets the user know if a violation has been detected. At TikTok’s scale, however, that process leaves some room for error, and for exposure, before a review is complete.

Now, TikTok is working to improve this, or at least to ensure that potentially violative material never reaches any viewers.

As explained by TikTok:

“Over the next few weeks, we’ll begin using technology to automatically remove some types of violative content identified upon upload, in addition to removals confirmed by our Safety team. Automation will be reserved for content categories where our technology has the highest degree of accuracy, starting with violations of our policies on minor safety, adult nudity and sexual activities, violent and graphic content, and illegal activities and regulated goods.”

So rather than letting potential violations through pending review, TikTok’s system will now remove them at upload, which could help to limit harmful exposure in the app.

That will, of course, produce some false positives, and some creator angst as a result – but TikTok notes that its detection systems have proven highly accurate.

“We’ve found that the false positive rate for automated removals is 5% and requests to appeal a video’s removal have remained consistent. We hope to continue improving our accuracy over time.”


Granted, a 5% false positive rate, applied across TikTok’s enormous volume of automated removals, may still be a significant number of videos in raw terms. But the risks of exposure are also significant, and it makes sense for TikTok to lean further into automated detection at that error rate.
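
To put that error rate in rough perspective, here is a hypothetical back-of-envelope sketch. The daily removal volume below is an assumed figure for illustration only – TikTok has not published its automated-removal counts – and only the 5% rate comes from TikTok’s statement.

# Hypothetical back-of-envelope estimate of wrongly removed videos per day.
# ASSUMPTION: the daily automated-removal volume is an illustrative guess,
# not a figure TikTok has published. Only the 5% rate is from TikTok.
FALSE_POSITIVE_RATE = 0.05                 # 5%, per TikTok's stated figure
assumed_daily_auto_removals = 1_000_000    # hypothetical volume for illustration

wrongly_removed_per_day = FALSE_POSITIVE_RATE * assumed_daily_auto_removals
print(f"Estimated wrongful removals per day: {wrongly_removed_per_day:,.0f}")
# Under these assumptions: roughly 50,000 videos per day

Even at a modest assumed volume, the raw number of affected creators is non-trivial, which is why the appeals process matters alongside the error rate itself.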

And there’s another important benefit:

“In addition to improving the overall experience on TikTok, we hope this update also supports resiliency within our Safety team by reducing the volume of distressing videos moderators view and enabling them to spend more time in highly contextual and nuanced areas, such as bullying and harassment, misinformation, and hateful behavior.”

The toll content moderation can take on staff is significant, as several investigations have documented, and any steps that reduce that burden are likely worth taking.

In addition, TikTok is also rolling out a new display for account violations and reports, in order to improve transparency – and, ideally, to stop users from pushing the limits.

Image: TikTok’s new policy violation display

As shown here, the new system will display the violations accrued by each user, with new warnings also displayed in different areas of the app as reminders of the same.

Penalties escalate from these initial warnings to full bans, based on repeated violations. For more serious issues, like child sexual abuse material, TikTok will remove accounts automatically, and it can also block a device outright to prevent new accounts from being created.

These are important measures, especially given TikTok’s young user base. Internal data published by The New York Times last year showed that around a third of TikTok’s user base is 14 years old or under, which means that there’s a significant risk of exposure for youngsters – either as creators or viewers – within the app.

TikTok has already faced various investigations on this front, including temporary bans in some regions due to its content. Last year, TikTok came under scrutiny in Italy after a ten-year-old girl died while trying to replicate a viral trend from the app.


Cases like this underline the need for TikTok, specifically, to implement more measures to protect users from dangerous exposure, and these new tools should help to combat violations and stop such content from ever being seen.

TikTok also notes that 60% of people who have received a first warning for violating its guidelines have not gone on to have a second violation, which is another vote of confidence in the process.

And while there will be some false positives, the risks of harmful exposure far outweigh the potential inconvenience in this respect.

You can read more about TikTok’s new safety updates here.

Socialmediatoday.com


UK teen died after ‘negative effects of online content’: coroner


Molly Russell was exposed to online material ‘that may have influenced her in a negative way’ – Copyright POOL/AFP/File Philip FONG

A 14-year-old British girl died from an act of self-harm while suffering from the “negative effects of online content”, a coroner said Friday in a case that shone a spotlight on social media companies.

Molly Russell was “exposed to material that may have influenced her in a negative way and, in addition, what had started as a depression had become a more serious depressive illness,” Andrew Walker ruled at North London Coroner’s Court.

The teenager “died from an act of self-harm while suffering depression”, he said, but added it would not be “safe” to conclude it was suicide.

Some of the content she viewed was “particularly graphic” and “normalised her condition,” said Walker.

Russell, from Harrow in northwest London, died in November 2017, leading her family to set up a campaign highlighting the dangers of social media.

“There are too many others similarly affected right now,” her father Ian Russell said after the ruling.


“At this point, I just want to say however dark it seems, there is always hope.

“I hope that this will be an important step in bringing about much needed change,” he added.

The week-long hearing became heated when the family’s lawyer, Oliver Sanders, took an Instagram executive to task.

A visibly angry Sanders asked Elizabeth Lagone, the head of health and wellbeing at Meta, Instagram’s parent company, why the platform allowed children to use it when it was “allowing people to put potentially harmful content on it”.

“You are not a parent, you are just a business in America. You have no right to do that. The children who are opening these accounts don’t have the capacity to consent to this,” he said.

Lagone apologised after being shown footage, viewed by Russell, that “violated our policies”.

Of the 16,300 posts Russell saved, shared or liked on Instagram in the six-month period before her death, 2,100 related to depression, self-harm or suicide, the inquest heard.

Children’s charity NSPCC said the ruling “must be a turning point”.


“Tech companies must be held accountable when they don’t make children’s safety a priority,” tweeted the charity.

“This must be a turning point,” it added, stressing that any delay to a government bill dealing with online safety “would be inconceivable to parents”.
