

Twitter Reports a Jump in Government Removal Requests in Latest Transparency Report





Twitter has published its latest transparency and enforcement update, which outlines all of the accounts and violations it took action on between January 1st and June 30th, 2021. The report highlights key trends and shifts in platform usage, and misuse, as Twitter looks to improve the user experience while staying aligned with its freedom of speech ethos.

And there are indeed some interesting trends. First off, Twitter says that it received a record 43,387 legal demands from governments to remove content in the period, impacting some 196,878 accounts.

As explained by Twitter:

“Of the total global volume of legal demands, 95% originated from only five countries (in decreasing order): Japan, Russia, Turkey, India, and South Korea. We withheld or required account holders to remove some or all of the reported content in response to 54% of these global legal demands.”

Twitter became a key focus of Russian authorities last year, with the platform facing a possible ban at one stage for refusing to comply with requests from the Kremlin to remove content. Twitter eventually did comply with the order, after Russian authorities throttled the service, but the situation remains tense, as Twitter weighs its principle of facilitating free speech against more restrictive rules in some regions.

Indian authorities have also sought to censor elements of the app, which Twitter has likewise resisted, leading to conflicts in that region as well. Japanese officials, too, have sought removals related to political disputes.

It’s a challenging situation for Twitter, and one that poses a significant threat to the app’s growth, especially if the platform does end up being hit with regional bans. Each conflict sees Twitter’s share price drop, and with the app becoming a bigger tool for information sharing and public debate, this will remain a source of concern on many fronts.


On another front, Twitter also required account holders to remove 4.7 million Tweets that violated the Twitter Rules in the period, also a record.

Twitter Transparency Report

“Of the Tweets removed, 68% received fewer than 100 impressions prior to removal, with an additional 24% receiving between 100 and 1,000 impressions. In total, impressions on these violative Tweets accounted for less than 0.1% of all impressions for all Tweets during that time period.”

As you can see here, Twitter removed the most content for violating its ‘Sensitive Media’ policies, an increase in action that Twitter attributes to “initiatives launched to bolster operational capacity”. More moderation staff leads to more content reviews, which has resulted in more offensive material being removed from the app, a good result.

Twitter also says that it permanently suspended 453,754 unique accounts for violations of its child sexual exploitation policy, with 89% of them being proactively identified through industry hash sharing. Twitter also suspended 44,974 unique accounts for the promotion of terrorism and violent organizations.

Twitter additionally reports that the US became the single largest source of government information requests in the period, with 3,026 requests.

Twitter Transparency Report

“These requests accounted for 27% of all accounts specified from around the world, and Twitter complied, in whole or in part, with 68% of these US information requests.”

With former US President Donald Trump banned from the platform, and various officials under investigation for their conduct in relation to the Capitol riot in January last year, it makes sense that more information was being sought in relation to such activity in the app.

It’s an interesting snapshot of Twitter’s enforcement actions, and of key trends that could impact the app moving forward. The most significant is likely the ongoing conflict with governments over potential censorship and the removal of tweeted content at their behest – and what happens if Twitter refuses. Twitter wants to hold true to its free speech fundamentals, but as the app becomes a bigger target for influence operations, and is seen by more politicians as a means to sway voters, it will likely come under growing pressure on this front. That puts Twitter management in a difficult position: holding to its principles while also managing shareholder expectations.

That seems problematic, especially if the platform does end up facing bans as a result. Twitter has some 24.5 million users in India, for example, its third-biggest user market, and as we’ve seen with TikTok, Indian authorities will ban a social app on political grounds if they see fit.

Twitter, the business, would struggle to take a blow of that magnitude. But could it deal with the potential user fallout of operating at the bidding of local authorities?


It seems that this could become a bigger point of consternation at some stage, as government removal requests continue to flow in.




UK teen died after ‘negative effects of online content’: coroner




Molly Russell was exposed to online material ‘that may have influenced her in a negative way’ – Copyright POOL/AFP/File Philip FONG

A 14-year-old British girl died from an act of self-harm while suffering from the “negative effects of online content”, a coroner said Friday in a case that shone a spotlight on social media companies.

Molly Russell was “exposed to material that may have influenced her in a negative way and, in addition, what had started as a depression had become a more serious depressive illness,” Andrew Walker ruled at North London Coroner’s Court.

The teenager “died from an act of self-harm while suffering depression”, he said, but added it would not be “safe” to conclude it was suicide.

Some of the content she viewed was “particularly graphic” and “normalised her condition,” said Walker.

Russell, from Harrow in northwest London, died in November 2017, leading her family to set up a campaign highlighting the dangers of social media.

“There are too many others similarly affected right now,” her father Ian Russell said after the ruling.


“At this point, I just want to say however dark it seems, there is always hope.

“I hope that this will be an important step in bringing about much needed change,” he added.

The week-long hearing became heated when the family’s lawyer, Oliver Sanders, took an Instagram executive to task.

A visibly angry Sanders asked Elizabeth Lagone, the head of health and wellbeing at Meta, Instagram’s parent company, why the platform allowed children to use it when it was “allowing people to put potentially harmful content on it”.

“You are not a parent, you are just a business in America. You have no right to do that. The children who are opening these accounts don’t have the capacity to consent to this,” he said.

Lagone apologised after being shown footage, viewed by Russell, that “violated our policies”.

Of the 16,300 posts Russell saved, shared or liked on Instagram in the six-month period before her death, 2,100 related to depression, self-harm or suicide, the inquest heard.

Children’s charity NSPCC said the ruling “must be a turning point”.


“Tech companies must be held accountable when they don’t make children’s safety a priority,” tweeted the charity.

“This must be a turning point,” it added, stressing that any delay to a government bill dealing with online safety “would be inconceivable to parents”.

