
X Shares New Data on its Efforts to Combat CSAM in the App


Any time a company releases a report in the period between Christmas and New Year, when media attention is especially low, it’s going to be received with a level of skepticism from the press.

Which is the case this week with X’s latest performance update. Amid ongoing concerns about the platform’s revised content moderation approach, which has seen more offensive and harmful posts remain active in the app and prompted more ad partners to halt their X campaigns, the company is now seeking to clarify its efforts in one key area, which Elon Musk himself has made a priority.

X’s latest update focuses on its efforts to stamp out child sexual abuse material (CSAM), which it claims to have significantly reduced through improved processes over the last 18 months. Third-party reports contradict this, but in raw numbers, X is seemingly doing a lot more to detect and address CSAM.

Though the details here are relevant.

First off, X says that it’s suspending a lot more accounts for violating its rules on CSAM.


As per X:

“From January to November of 2023, X permanently suspended over 11 million accounts for violations of our CSE policies. For reference, in all of 2022, Twitter suspended 2.3 million accounts.”

So X is actioning more violations, though that total would also include wrongful suspensions and erroneous actions. Which is still better than doing less, but this, in itself, may not be a great reflection of improvement on this front.

X also says that it’s reporting a lot more CSAM incidents:

“In the first half of 2023, X sent a total of 430,000 reports to the NCMEC CyberTipline. In all of 2022, Twitter sent over 98,000 reports.”

Which is also impressive, but then again, X is now employing “fully automated” NCMEC reporting, which means that detected posts are no longer subject to manual review before being reported. So a lot more content is subsequently being reported.


Again, you would assume that leads to a better outcome, as more reports should equal less risk. But this figure is also not entirely indicative of effectiveness without data from NCMEC confirming the validity of such reports. So X’s reporting numbers are rising, but there’s not a heap of insight into the broader effectiveness of its approaches.

For example, X, at one stage, also claimed to have virtually eliminated CSAM overnight by blocking identified hashtags from use. 

Which is likely what X is referring to here:

“Not only are we detecting more bad actors faster, we’re also building new defenses that proactively reduce the discoverability of posts that contain this type of content. One such measure that we have recently implemented has reduced the number of successful searches for known Child Sexual Abuse Material (CSAM) patterns by over 99% since December 2022.”

That may be true for the identified tags, but experts claim that as soon as X blacklists certain tags, CSAM peddlers simply switch to other ones, so while activity on certain searches may have been reduced, it’s hard to say that the approach has been highly effective overall.

But the numbers look good, right? It certainly seems like more is being done, and that CSAM is being limited in the app. But without definitive, expanded research, we don’t really know for sure.


And as noted, third-party insights suggest that CSAM has become more widely accessible in the app under X’s new rules and processes. Back in February, The New York Times conducted a study to uncover the rate of accessibility of CSAM in the app. It found that such content was easy to find, that X was slower to action reports of it than Twitter had been in the past (leaving it active in the app for longer), and that X was also failing to adequately report CSAM instance data to relevant agencies (one of the agencies in question has since noted that X has improved, largely due to automated reports).

Another report from NBC found the same: that despite Musk’s proclamations that he was making CSAM detection a key priority, much of X’s action had been little more than surface level, with no real effect. The fact that Musk had also cut most of the team responsible for this element had potentially exacerbated the problem, rather than improved it.

Making matters even worse, X recently reinstated the account of a prominent right-wing influencer who’d previously been banned for sharing CSAM content.

Yet, at the same time, Elon and Co. are promoting their action to address CSAM as a key response to brands pulling their X ad spend, arguing that these numbers show such concerns are invalid, because the platform is, in fact, doing more to address this element. But most of those concerns relate to Musk’s own posts and comments, not to CSAM specifically.

As such, it’s an odd report, shared at an odd time, which seemingly highlights X’s expanding efforts, but doesn’t really address all of the related concerns.

And when you also consider that X Corp is actively fighting to block a new law in California which would require social media companies to publicly reveal how they carry out content moderation on their platforms, the full slate of info doesn’t seem to add up.

Essentially, X is saying that it’s doing more, and that its numbers reflect such. But that doesn’t definitively demonstrate that X is doing a better job at limiting the spread of CSAM. 


But theoretically, it should be limiting the flow of CSAM in the app, by taking more action, automated or not, on more posts.

The data certainly suggests that X is making a bigger push on this front, but the effectiveness remains in question.



