
TikTok Will Submit Algorithms and Moderation Processes for Review to Ease Concerns Around CCP Meddling


As part of its broader effort to reassure Western lawmakers of its independence from Chinese Government meddling, TikTok will now enable US data hosting partner Oracle to regularly review its algorithms and content moderation models, in order to ensure that they are not being manipulated by Chinese authorities.

As reported by Axios, the new process will see TikTok submit its algorithms for assessment by Oracle ‘to ensure that outcomes are in line with expectations and that the models have not been manipulated in any way’.

The reviews will also incorporate audits of TikTok’s content moderation processes, which may help the company avoid further regulatory scrutiny, and calls for bans, amid ongoing concerns from security experts, lawmakers and foreign policy analysts.

TikTok has repeatedly been criticized over alleged CCP interference, including the censorship of certain anti-China topics, the implementation of extreme moderation models, and other forms of manipulation within the app.

TikTok has denied all such claims, but with recent reports showing that more and more young people now rely on TikTok for discovery, and for news content, the app’s influence is clearly on the rise, which will only escalate tensions around its growth.

Tensions between the US and China remain high after Speaker of the House Nancy Pelosi’s recent trip to Taiwan, which China regards as part of its own territory rather than an independent state. And while the US Government doesn’t officially recognize Taiwan as an independent country, it has repeatedly vowed to help Taiwan defend itself against a potential Chinese attack, which would essentially pit the US against China in a regional conflict.

This is just one example of the ongoing tensions, and it’s why many view TikTok as a security risk: through its Chinese-owned parent company ByteDance, TikTok is subject to China’s strict cybersecurity regulations, which means the CCP can compel ByteDance to hand over US user data if and when it chooses.


There’s nothing to suggest that such data has ever been requested by the Chinese Government, but security experts remain skeptical of the app, and concerned about the risks it may pose from a surveillance perspective.

Just recently, FCC Commissioner Brendan Carr published an open letter that called on both Apple and Google to remove TikTok from their app stores due to TikTok’s ‘pattern of surreptitious data practices’, which specifically relates to how it shares data with its Chinese parent company.

Last month, Australian cybersecurity firm Internet 2.0 published an analysis suggesting that TikTok collects ‘excessive’ amounts of user data, including checking device location at least once an hour, continuously requesting access to contacts even after the user has denied permission, tracking all installed apps, and more.

The ongoing concerns around TikTok have kept the app front of mind for US regulators and policymakers in other regions, which could eventually lead to further action if TikTok is unable to counter those concerns with its own transparency efforts and reporting.

That’s what TikTok is hoping to achieve with this new partnership. And with Oracle now also hosting all of TikTok’s US user data, that too could help establish a clearer division between the short-form video app and its Chinese parent company.

Ultimately, though, TikTok is owned by a Chinese company, and that company is beholden to CCP regulations.

Whether that becomes a bigger concern, or whether this new process offers enough assurance to calm things down, remains to be seen.




UK teen died after ‘negative effects of online content’: coroner


Molly Russell was exposed to online material ‘that may have influenced her in a negative way’ – Copyright POOL/AFP/File Philip FONG

A 14-year-old British girl died from an act of self-harm while suffering from the “negative effects of online content”, a coroner said Friday, in a case that shone a spotlight on social media companies.

Molly Russell was “exposed to material that may have influenced her in a negative way and, in addition, what had started as a depression had become a more serious depressive illness,” Andrew Walker ruled at North London Coroner’s Court.

The teenager “died from an act of self-harm while suffering depression”, he said, but added it would not be “safe” to conclude it was suicide.

Some of the content she viewed was “particularly graphic” and “normalised her condition,” said Walker.

Russell, from Harrow in northwest London, died in November 2017, leading her family to set up a campaign highlighting the dangers of social media.

“There are too many others similarly affected right now,” her father Ian Russell said after the ruling.

Advertisement

“At this point, I just want to say, however dark it seems, there is always hope.

“I hope that this will be an important step in bringing about much needed change,” he added.

The week-long hearing became heated when the family’s lawyer, Oliver Sanders, took an Instagram executive to task.

A visibly angry Sanders asked Elizabeth Lagone, the head of health and wellbeing at Meta, Instagram’s parent company, why the platform allowed children to use it when it was “allowing people to put potentially harmful content on it”.

“You are not a parent, you are just a business in America. You have no right to do that. The children who are opening these accounts don’t have the capacity to consent to this,” he said.

Lagone apologised after being shown footage, viewed by Russell, that “violated our policies”.

Of the 16,300 posts Russell saved, shared or liked on Instagram in the six-month period before her death, 2,100 related to depression, self-harm or suicide, the inquest heard.

Children’s charity NSPCC said the ruling “must be a turning point”.

Advertisement

“Tech companies must be held accountable when they don’t make children’s safety a priority,” tweeted the charity.

“This must be a turning point,” it added, stressing that any delay to a government bill dealing with online safety “would be inconceivable to parents”.
